Merge branch 'master' of github.com:philipdelff/NMdata
philipdelff committed Oct 19, 2024
2 parents 325a686 + 9026b1a commit 4390c98
Showing 80 changed files with 533 additions and 1,916 deletions.
11 changes: 11 additions & 0 deletions .Rbuildignore
@@ -43,9 +43,20 @@ input.shk
INTER
LINK.LNK
README.html
inst/examples/nonmem/*.msf
inst/examples/nonmem/*.xml
inst/examples/nonmem/xgxr002.*
;;; used in vignettes/NMscanData.Rmd
;; inst/examples/nonmem/xgxr014.*
tests/testthat/testOutput/simulations/nonmem
tests/testthat/testOutput/simulations/NMsimData_xgxr.+\.rds
tests/testthat/testData/simulations/xgxr..._subprobs/NMsim_xgxr..._subprobs_dir.+
;; tests/testthat/testData/nonmem/xgxr005.*
;; tests/testthat/testData/nonmem/xgxr017.*
;; tests/testthat/testData/nonmem/xgxr018.*
;; tests/testthat/testData/nonmem/xgxr020.*
;; tests/testthat/testData/nonmem/xgxr021.*
tests/testthat/testData/simulations/xgxr.*/.*.rds
tests/testthat/testData/simulations/xgxr.*/.*.csv

.*_input.rds
4 changes: 2 additions & 2 deletions DESCRIPTION
@@ -1,7 +1,7 @@
Package: NMdata
Type: Package
Title: Preparation, Checking and Post-Processing Data for PK/PD Modeling
Version: 0.1.7.905
Authors@R:
c(person(given="Philip", family="Delff",
email = "philip@delff.dk",
@@ -33,6 +33,6 @@ Suggests:
htmltools,
spelling
Encoding: UTF-8
BugReports: https://github.com/philipdelff/NMdata/issues
Language: en-US
URL: https://philipdelff.github.io/NMdata/
103 changes: 86 additions & 17 deletions NEWS.md
@@ -1,5 +1,30 @@
# NMdata 0.1.8

# 0.1.7
## New features

* If the affected columns are truncated from the csv file,
  `NMwriteData()` now accepts data with commas in values, even when
  writing to csv files.
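Why truncation matters: Nonmem-style csv input is, to my knowledge, read without quote handling, so a comma inside a value would shift all subsequent columns. A minimal Python sketch of the idea (the helper name and its behavior are illustrative assumptions, not NMdata's implementation):

```python
import csv
import io

def write_nonmem_csv(rows, header, drop=()):
    """Write an unquoted, Nonmem-style csv, dropping named columns.

    Nonmem reads comma-separated data without un-quoting fields, so
    columns whose values contain commas must be dropped (or cleaned)
    before writing; QUOTE_NONE makes the writer fail loudly otherwise.
    """
    keep = [h for h in header if h not in drop]
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_NONE)
    writer.writerow(keep)
    for row in rows:
        writer.writerow([row[h] for h in keep])
    return buf.getvalue()
```

With the offending column dropped, the remaining data writes cleanly; keeping it makes `csv.writer` raise `csv.Error`, since the comma cannot be represented without quoting.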

* `NMscanTables()` includes model name in meta data table. Useful for
generation of overviews of output tables from multiple models.

## Bugfixes

* Support for data file names containing the substrings "ACCEPT" and
  "IGN" has been added. Previously, such data set file names could
  lead to failures when interpreting data subsetting filters (ACCEPT
  and IGN/IGNORE) in Nonmem control streams.
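In `NMapplyFilters()`, IGNORE is first normalized to IGN; the fix then requires that a detected IGN is not preceded by an alphanumeric character (the R code uses the perl lookbehind `(?<![[:alnum:]])IGN`). A Python sketch of the same idea; the helper function is illustrative:

```python
import re

# Require that "IGN" is not preceded by an alphanumeric character, so
# the substring "IGN" inside a file name (e.g. a hypothetical
# "xgxrIGN.csv") is not parsed as a data filter.
IGN_FILTER = re.compile(r"(?<![A-Za-z0-9])IGN *=* *[^ (+=]")

def find_ign_filters(text):
    """Return IGN-style single-character filters found in text."""
    return IGN_FILTER.findall(text)
```

Expression-style filters such as `IGN(DV.GT.5)` are excluded by the trailing character class and handled separately, as in the R code.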

## Other improvements

* `NMscanMultiple()` would sometimes print a messy overview of the
  results. That has been fixed, with no implications for the returned
  results.

* `dt2mat()` now returns actual matrix objects. This provides
  compatibility with the simpar package.

# NMdata 0.1.7

## New features
* `NMreadPartab()` has been generalized to support comment formats very
@@ -9,7 +34,7 @@
any structure should be supported as long as delimiters are not
alphabetic or numeric (so any special characters should
work). Notice that delimiters can change between fields. Example:
`"$THETA 1.4 ; 3 - CL (Clearance) [L/h]"` would be matched by
`NMreadPartab(...,format="%init ;%idx-%symbol(%label)[%unit]")`
which would then return a table including columns init, idx, symbol,
label, and unit. The comments must be systematic within say `$THETA`
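The format string can be thought of as compiling into a regular expression with one capture group per `%field`. A rough Python sketch of that idea (the whitespace tolerance and character classes here are assumptions, not NMdata's exact rules):

```python
import re

def format_to_regex(fmt):
    """Compile a spec like '%init ;%idx-%symbol(%label)[%unit]' into a
    regex with one named group per %field. Literal delimiters between
    fields are matched with optional surrounding whitespace."""
    out = []
    for part in re.split(r"(%[a-z]+)", fmt):
        if part.startswith("%"):
            # field: anything except the delimiter characters
            out.append(r"(?P<%s>[^;()\[\]-]*?)" % part[1:])
        elif part.strip():
            # literal: match each character, tolerating whitespace
            out.append(r"\s*" + r"\s*".join(re.escape(c) for c in part.strip()) + r"\s*")
    return re.compile("".join(out) + r"$")

def parse_comment(fmt, text):
    """Parse one parameter comment; returns a dict of fields or None."""
    m = format_to_regex(fmt).match(text)
    return {k: v.strip() for k, v in m.groupdict().items()} if m else None
```

For example, `parse_comment("%init ;%idx-%symbol(%label)[%unit]", "1.4 ; 3 - CL (Clearance) [L/h]")` yields the init, idx, symbol, label and unit fields.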
@@ -22,8 +47,40 @@
that references THETA(1), `NMrelate()` will return a label
`TVCL`.

* Improved support for character-coded `TIME` and `DATE`
  columns. The default behavior is to allow (not require) `TIME` and
  `DATE` columns to be non-numeric. This is to support the Nonmem
  character format of DATE and TIME. It affects sorting of columns
  (`NMorderColumns()`) and the auto-generated `$INPUT` section
  suggestions. Where applicable, the `allow.char.TIME` argument
  controls this behavior. Set `allow.char.TIME=FALSE` to require
  `TIME` and `DATE` columns to be numeric. Thanks to Sanaya Shroff for
  the request, which enables `NMsim` to simulate using data sets with
  one or more of these columns coded as character.

* `mergeCheck(x,y)` has new options for handling common columns in
  data sets. The `common.cols` argument replaces `fun.commoncols` with
  added functionality.

- `common.cols="merge.by"` includes the common columns in the merge
  `by`, even if they are not provided in the `by` argument.

- `common.cols="drop.x"` drops the common columns from `x` and
  overwrites them with the columns in `y`.

- `common.cols="drop.y"` drops the common columns from `y`, preserving
  those in `x`.

- `common.cols=base::stop` (the default) throws an error if common
  columns are not included in the merge `by` columns.

- `common.cols=NULL` disables the handling and returns the common
  columns suffixed ".x" and ".y".

- Any function can be supplied; e.g. `common.cols=warning` will issue
  a warning instead of throwing an error.
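A plain-Python sketch of how those options behave (dict-based rows; `merge_common` and its option strings only mirror the `common.cols` values described above, this is not NMdata code, and the `common.cols=NULL` suffixing variant is omitted for brevity):

```python
def merge_common(x, y, by, common_cols="stop"):
    """Left-merge two lists of uniform dict rows, handling columns
    present in both x and y according to common_cols."""
    shared = [c for c in x[0] if c in y[0] and c not in by]
    if common_cols == "stop" and shared:
        raise ValueError("common columns %s not in 'by'" % shared)
    if common_cols == "merge.by":
        by = list(by) + shared        # treat shared columns as keys
    elif common_cols == "drop.x":
        x = [{k: v for k, v in r.items() if k not in shared} for r in x]
    elif common_cols == "drop.y":
        y = [{k: v for k, v in r.items() if k not in shared} for r in y]
    lookup = {tuple(r[c] for c in by): r for r in y}
    return [dict(r, **lookup.get(tuple(r[c] for c in by), {})) for r in x]
```

With `"drop.x"` the values from `y` win; with `"drop.y"` the values from `x` survive; with `"merge.by"` rows only combine when the shared columns agree.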



* `NMreadExt()` separates objective function values into a separate list
element. The `return` argument is used to control what data to
@@ -32,7 +89,9 @@
objective function value, or "all" for a list with all of those.

* `NMreadExt()` adds block information to `OMEGA` and `SIGMA` elements
based on off-diagonal values. `iblock` identifies which block the
element is in. `blocksize` is the size of the block the element is
in. Thank you Brian Reilly for contributing to this.
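The block detection can be sketched as a sweep over the lower triangle: a row joins the current block if it has a nonzero off-diagonal linking back into it, otherwise it starts a new block. A Python sketch (the sweep rule is an assumption; NMdata's exact algorithm may differ):

```python
def block_structure(cov):
    """Assign 1-based block numbers (iblock) and block sizes to the
    diagonal elements of a symmetric (OMEGA-like) matrix."""
    n = len(cov)
    starts = [0]
    for i in range(1, n):
        if all(cov[i][j] == 0 for j in range(starts[-1], i)):
            starts.append(i)          # no link back: a new block starts
    starts.append(n)
    iblock, blocksize = [], []
    for b in range(len(starts) - 1):
        size = starts[b + 1] - starts[b]
        iblock += [b + 1] * size      # 1-based block index
        blocksize += [size] * size
    return iblock, blocksize
```

A diagonal matrix yields one block of size 1 per element; a nonzero off-diagonal merges the elements it connects into one block.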

* `NMreadExt()` adds a `par.name` column which provides consistent
parameter naming. Instead of Nonmem's `THETA1` which is found in the
@@ -61,14 +120,16 @@
* `NMreadExt()` would mess up iterations and parameter estimates if
`as.fun` was set to return something other than `data.table`s. Fixed.

# NMdata 0.1.6

## New features

* Function `NMreadShk()` to read and format `.shk` (shrinkage) files.

* Functions `mat2dt()` and `dt2mat()` included to convert between
matrices and `data.frame` format of matrix data - especially for
symmetric matrices.
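Conceptually, the pair converts between a symmetric matrix and a long table of its lower-triangle elements, 1-based like Nonmem's `OMEGA(i,j)` indexing. A Python sketch (NMdata's actual functions work on data.frames and carry parameter labels):

```python
def mat2dt(mat):
    """Lower triangle of a symmetric matrix as (i, j, value) records,
    with 1-based indices."""
    return [(i + 1, j + 1, mat[i][j])
            for i in range(len(mat)) for j in range(i + 1)]

def dt2mat(records):
    """Rebuild the full symmetric matrix from (i, j, value) records."""
    n = max(i for i, _, _ in records)
    mat = [[0.0] * n for _ in range(n)]
    for i, j, v in records:
        mat[i - 1][j - 1] = v
        mat[j - 1][i - 1] = v         # mirror into the upper triangle
    return mat
```

The round trip is lossless because a symmetric matrix is fully determined by its lower triangle.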

* Function `addOmegaCorr()` adds estimated correlation between ETAs to
@@ -87,7 +148,7 @@
`ADDL` and `II`) followed by other doses. Fixed. Thanks to Simone
Cassani for catching it.

# NMdata 0.1.5
## New features
* `countFlags` no longer needs a table of flags. By default it will
summarize the ones found in data. If additional flags wanted in
@@ -121,7 +182,9 @@
so). Now `NMdataConf(reset=TRUE)` makes sure to wipe all such
configuration if exists.

# NMdata 0.1.4

## New functions
* `NMreadParsText()` is a new function to extract comments to
@@ -157,7 +220,7 @@ tables.
* `NMreadText` would fail to disregard some comment lines when
`keep.comments=FALSE`. Fixed.

# NMdata 0.1.3
* Better support for models with multiple estimation
steps. Particularly reading output tables now better distinguishes
between Nonmem table numbers and repetitions (like
@@ -167,7 +230,7 @@ tables.
* Improved support for reading multiple models with NMreadExt and
NMreadPhi.

# NMdata 0.1.2
## New features
* NMreadExt is a new function that reads parameter estimates,
uncertainties if available, estimation iterations and other
@@ -186,7 +249,7 @@ NMreadPhi.
arguments.


# NMdata 0.1.1
## New features
* NMwriteSection can now handle functions to perform control stream
editing. NMwriteSection provides methods to edit control
@@ -202,7 +265,7 @@
with a directory (`dir`) only.
* Minor bugfix in compareCols in case input is an unnamed list

# NMdata 0.1.0

## New features
* The super fast `fst` format is now supported. Data sets can be
@@ -271,7 +334,7 @@ This release provides a few bugfixes, nothing major.
cumulative number and it aligns with col.doscuma which is the
cumulative amount.

# NMdata 0.0.16
## New features
* `NMwriteSection()` includes argument `location`. In combination with
`section`, this determines where the new section is
@@ -295,11 +358,11 @@ on file. This has been fixed to support cases where renaming or a
pseudonym is being used to generate an `ID` column in `$INPUT`.


# NMdata 0.0.15
This update makes no difference to users. A technicality has been
changed to ensure consistent test results once data.table 1.14.7 is
released.

# NMdata 0.0.14
## New features
* `fnExtension()` has been generalized. It now ignores leading spaces in
new extension, and extensions with zero or one leading period are
@@ -362,9 +425,13 @@ changed to ensure consistent test results once data.table 1.14.7 is
identifier is simplified.

* NMdata version added to welcome message.

# NMdata 0.0.13

## New functions

* `NMexpandDoses()` - Transform repeated dosing events (`ADDL`/`II`)
to individual dosing events
* `addTAPD()` - Add cumulative number of doses, time of last dose,
@@ -374,12 +441,14 @@ changed to ensure consistent test results once data.table 1.14.7 is
has long been part of NMdata but has not been exported until now.

## New data

* A new data set called mad is included. It is based on the
mad_missing_duplicates from the `xgxr` package. Doses are implemented
using ADDL and II (so only one dosing row per subject). It is
included for testing the new NMexpandDoses and addTAPD functions.

## Bugfixes

* Non-critical bugfix in mergeCheck dimensions overview printed to
console. One column too many was reported in input and result
data. No implications on results from mergeCheck.
26 changes: 20 additions & 6 deletions R/NMapplyFilters.R
@@ -31,7 +31,7 @@ NMapplyFilters <- function(data,file,text,lines,invert=FALSE,as.fun,quiet) {
value <- NULL

### Section end: Dummy variables, only not to get NOTEs in package checks


if(missing(quiet)) quiet <- NULL
quiet <- NMdataDecideOption("quiet",quiet)
@@ -71,11 +71,19 @@ NMapplyFilters <- function(data,file,text,lines,invert=FALSE,as.fun,quiet) {
## simplifying so IGNORE/IGN is always IGN
text3 <- gsub("IGNORE","IGN",text3)


## ^(.* )* : if anything before IGN, there must be a space in between
## conds.sc <- regmatches(text3, gregexpr("^(.* )*(?:IGN) *=* *[^ (+=]",text3))
conds.sc <- regmatches(text3, gregexpr("(?<![[:alnum:]])IGN *=* *[^ (+=]",text3,perl=T))
conds.sc
conds.sc <- do.call(c,conds.sc)
### getting rid of single char conditions


## text3 <- gsub(paste0("^(\\(.* \\)*)IGN"," *=* *[^ (+=]"),"\\1",text3)
## gsub("^((.* )*)IGN *=* *[^ (+=](.*)","\\1\\2",text3)
## gsub("((.* )*)IGN *=* *[^ (+=](.*)","\\1\\2",text3)
text3 <- gsub("(?<![[:alnum:]])IGN *=* *[^ (+=]","",perl=TRUE,text3)

## check if IGNORE or ACCEPT are found. If both found, it is an error.
any.accepts <- any(grepl("ACCEPT",text3))
any.ignores <- any(grepl("IGN",text3))
@@ -89,7 +97,7 @@ NMapplyFilters <- function(data,file,text,lines,invert=FALSE,as.fun,quiet) {
type.condition <- "ACCEPT"
}


### expression-style ones
## this is not entirely correct.
### 1. A comma-separated list of expressions can be inside the ()s.
@@ -101,6 +109,7 @@ NMapplyFilters <- function(data,file,text,lines,invert=FALSE,as.fun,quiet) {




## translating single-character conditions
name.c1 <- colnames(data)[1]
scs <- sub(paste0("IGN"," *=* *(.+)"),"\\1",conds.sc)
@@ -165,8 +174,11 @@ NMapplyFilters <- function(data,file,text,lines,invert=FALSE,as.fun,quiet) {

cond.combine <- "|"
## remember to negate everything if the type is ignore

if(type.condition=="IGN") {
if(length(expressions.list)){
expressions.list <- paste0("!",expressions.list)
}
cond.combine <- "&"
}

@@ -175,11 +187,13 @@ NMapplyFilters <- function(data,file,text,lines,invert=FALSE,as.fun,quiet) {
} else {
conditions.all.sc <- "TRUE"
}


expressions.all <- NULL
if(length(expressions.list)) {
expressions.all <- paste0("(",paste(expressions.list,collapse=cond.combine),")")
}


if(invert) {

24 changes: 18 additions & 6 deletions R/NMextractText.R
@@ -74,7 +74,6 @@ NMextractText <- function(file, lines, text, section, char.section,
keepEmpty, keepName,
keepComments, asOne
){

nsection <- NULL
idx <- NULL

@@ -105,9 +104,13 @@ NMextractText <- function(file, lines, text, section, char.section,
}
if(type=="lst") type <- "res"


if(!match.exactly && section!="."){
section <- paste0(substring(section,1,min(nchar(section),3)),"[A-Z]*")
}
## if(match.exactly && section!="."){
## section <- paste0(section,"[^[A-Z]]*")
## }


### Section end: Pre-process arguments
Expand All @@ -134,9 +137,18 @@ NMextractText <- function(file, lines, text, section, char.section,
},
all={lines}
)

if(F){
##:ess-bp-start::conditional@grepl("omega",section,ignore.case=T):##
browser(expr={grepl("omega",section,ignore.case=T)})##:ess-bp-end:##
}
## Find all the lines that start with the $section
## idx.starts <- grep(paste0("^ *",char.section,section," *"),lines)
if(match.exactly){
idx.starts <- (1:length(lines))[grepl(paste0("^ *",char.section,section,"[^A-Z]*"),lines) &
!grepl(paste0("^ *",char.section,section,"[A-Z]+"),lines) ]
} else {
idx.starts <- (1:length(lines))[grepl(paste0("^ *",char.section,section,"[^A-Z]*"),lines)]
}
idx.ends <- grep(paste0("^ *",char.end),lines)

## get the sections
@@ -187,7 +199,7 @@ NMextractText <- function(file, lines, text, section, char.section,

## result <- lapply(result, function(x)sub(paste0("^ *\\$",section,"[a-zA-Z]*"),"",x))
## "[a-zA-Z]*" is needed for abbrev section names. Like for SIMULATION in case of SIM.
dt.res[,text:=sub(paste0("^ *\\$",section),"",text)]
}


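The matching rules changed above can be illustrated with a small sketch: with exact matching, the section name must not be followed by further capital letters (so `SIM` does not match a `$SIMULATION` line), while non-exact matching reduces the name to a three-letter stem plus optional capitals (a Python analogue; the R patterns differ in detail):

```python
import re

def match_section(line, section, exact):
    """Does a control-stream line open the given $SECTION?

    exact=False: a three-letter stem plus any capitals, so "SIM" and
    "SIMULATION" match each other's sections.
    exact=True: the name must not continue with more capital letters,
    so "SIM" does not match a "$SIMULATION" line.
    """
    if not exact and section != ".":
        section = section[:3] + "[A-Z]*"
    pattern = r"^ *\$" + section
    if exact:
        # pad the line so a bare "$SIM" at end-of-line still matches
        return re.match(pattern + r"[^A-Z]", line + " ") is not None
    return re.match(pattern, line) is not None
```

This is why abbreviated names like `$SIM` for `$SIMULATION` work in the non-exact mode while exact mode keeps, say, `$OMEGA` from matching a hypothetical longer section name with the same prefix.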