diff --git a/articles/mcradds.html b/articles/mcradds.html
index eb880b5..f8bdd2c 100644
--- a/articles/mcradds.html
+++ b/articles/mcradds.html
@@ -563,7 +563,7 @@

Hypothesis of Pearson and Spearman
spearmanTest(x, y, h0 = 0.5, alternative = "greater")
#> $stat
#> cor lowerci upperci Z pval
-#> 0.6000000 -0.1431650 0.9640455 0.3243526 0.3728355
+#> 0.6000000 -0.1583052 0.9800536 0.3243526 0.3728355
#>
#> $method
#> [1] "Spearman's correlation"
@@ -621,8 +621,8 @@

Establishing Reference Range/Inter
#> N = 240
#> Outliers: NULL
#> Reference Interval: 9.04, 10.32
-#> RefLower Confidence Interval: 8.9792, 9.0973
-#> Refupper Confidence Interval: 10.2589, 10.3748
+#> RefLower Confidence Interval: 8.9801, 9.0969
+#> Refupper Confidence Interval: 10.2576, 10.3760

The first two methods are also accepted by NMPA guideline, but the robust method is not recommended by NMPA because if you want to establish a reference interval for your assay, you must collect the at
diff --git a/pkgdown.yml b/pkgdown.yml
index bed37cb..044032c 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -3,7 +3,7 @@ pkgdown: 2.0.7
pkgdown_sha: ~
articles:
  mcradds: mcradds.html
-last_built: 2023-10-12T03:36Z
+last_built: 2023-10-12T03:52Z
urls:
  reference: https://kaigu1990.github.io/mcradds/reference
  article: https://kaigu1990.github.io/mcradds/articles
diff --git a/reference/Rplot003.png b/reference/Rplot003.png
index 2574593..3275833 100644
Binary files a/reference/Rplot003.png and b/reference/Rplot003.png differ
diff --git a/reference/autoplot-3.png b/reference/autoplot-3.png
index f054b2b..2e442fc 100644
Binary files a/reference/autoplot-3.png and b/reference/autoplot-3.png differ
diff --git a/reference/getAccuracy.html b/reference/getAccuracy.html
index 607bee6..a89e64d 100644
--- a/reference/getAccuracy.html
+++ b/reference/getAccuracy.html
@@ -219,9 +219,9 @@

Examples ) getAccuracy(tb2, ref = "bnr") #> EST LowerCI UpperCI -#> apa 0.9479 0.9246 0.9679 -#> ana 0.9540 0.9328 0.9714 -#> opa 0.9511 0.9289 0.9689 +#> apa 0.9479 0.9245 0.9671 +#> ana 0.9540 0.9336 0.9714 +#> opa 0.9511 0.9311 0.9689 getAccuracy(tb2, ref = "bnr", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9479 0.9260 0.9686 diff --git a/search.json b/search.json index 0c7d374..34cdc88 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"Introduction to mcradds","text":"vignette shows general purpose usage mcradds R package. mcradds successor mcr R package developed Roche, therefore fundamental coding ideas method comparison regression borrowed . addition, supplement series useful functions methods based several reference documents CLSI NMPA guidance. can perform statistical analysis graphics different IVD trials utilizing analytical functions. However, unfortunately functions methods validated QC’ed, can guarantee entirely proper error-free. always strive compare results resources order obtain consistent . utilized past usual work process, believe quality package temporarily sufficient use. vignette going learn : Estimate sample size trials, following NMPA guideline. Evaluate diagnostic accuracy /without reference, following CLSI EP12-A2. Perform regression methods analysis plots, following CLSI EP09-A3. Perform bland-Altman analysis plots, following CLSI EP09-A3. Detect outliers 4E method CLSI EP09-A2 ESD CLSI EP09-A3. Estimate bias medical decision level, following CLSI EP09-A3. Perform Pearson Spearman correlation analysis adding hypothesis test confidence interval. Evaluate Reference Range/Interval, following CLSI EP28-A3 NMPA guideline. Add paired ROC/AUC test superiority non-inferiority trials, following CLSI EP05-A3/EP15-A3. Perform reproducibility analysis (reader precision) immunohistochemical assays, following CLSI /LA28-A2 NMPA guideline. Evaluate precision quantitative measurements, following CLSI EP05-A3. reference mcradds functions available mcradds website functions reference.","code":"browseVignettes(package = \"mcradds\")"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"common-ivd-trials-analyses","dir":"Articles","previous_headings":"","what":"Common IVD Trials Analyses","title":"Introduction to mcradds","text":"Every analysis purpose can achieved functions S4 methods mcradds package, present general usage . packages used vignette : data sets different purposes used vignette :","code":"library(mcradds) data(\"qualData\") data(\"platelet\") # data(creatinine, package = \"mcr\") data(\"calcium\") data(\"ldlroc\") data(\"PDL1RP\") data(\"glucose\")"},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-1","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.1","title":"Introduction to mcradds","text":"Suppose expected sensitivity criteria new assay 0.9, clinical acceptable criteria 0.85. 
conduct two-sided normal Z-test significance level α = 0.05 achieve power 80%, total sample 363.","code":"size_one_prop(p1 = 0.9, p0 = 0.85, alpha = 0.05, power = 0.8) #> #> Sample size determination for one Proportion #> #> Call: size_one_prop(p1 = 0.9, p0 = 0.85, alpha = 0.05, power = 0.8) #> #> optimal sample size: n = 363 #> #> p1:0.9 p0:0.85 alpha:0.05 power:0.8 alternative:two.sided"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-2","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.2","title":"Introduction to mcradds","text":"Suppose expected sensitivity criteria new assay 0.85, lower 95% confidence interval Wilson Score significance level α = 0.05 criteria 0.8, total sample 246. don’t want use CI Wilson Score just following NMPA’s suggestion appendix, CI Simple-asymptotic recommended 196 sample size, shown .","code":"size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> optimal sample size: n = 246 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:wilson size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> optimal sample size: n = 196 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:simple-asymptotic"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-3","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.3","title":"Introduction to mcradds","text":"Suppose expected correlation coefficient test reference assays 0.95, clinical acceptable criteria 0.9. 
conduct one-sided test significance level α = 0.025 achieve power 80%, total sample 64.","code":"size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> Sample size determination for testing Pearson's Correlation #> #> Call: size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> optimal sample size: n = 64 #> #> r1:0.95 r0:0.9 alpha:0.025 power:0.8 alternative:greater"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-4","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.4","title":"Introduction to mcradds","text":"Suppose expected correlation coefficient test reference assays 0.9, lower 95% confidence interval significance level α = 0.025 criteria greater 0.85, total sample 86.","code":"size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> Sample size determination for a Given Lower Confidence Interval of Pearson's Correlation #> #> Call: size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> optimal sample size: n = 86 #> #> r:0.9 lr:0.85 alpha:0.025 interval:c(10, 1e+05) tol:1e-05 alternative:greater"},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"create-2x2-contingency-table","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Evaluation of Diagnostic Accuracy","what":"Create 2x2 contingency table","title":"Introduction to mcradds","text":"Assume wide structure data like qualData contains measurements candidate comparative assays. scenario, ’d better define formula candidate assay first, followed comparative assay right formula, right ~. , add dimname argument indicate row column names 2x2 contingency table, define order levels prefer . Assume long structure data needs summarized, dummy data shown . formula define another format. left formula type assay, right measurement.","code":"head(qualData) #> Sample ComparativeN CandidateN #> 1 ID1 1 1 #> 2 ID2 1 0 #> 3 ID3 0 0 #> 4 ID4 1 0 #> 5 ID5 1 1 #> 6 ID6 1 1 tb <- qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) tb #> Contingency Table: #> #> levels: 1 0 #> ComparativeN #> CandidateN 1 0 #> 1 122 8 #> 0 16 54 dummy <- data.frame( id = c(\"1001\", \"1001\", \"1002\", \"1002\", \"1003\", \"1003\"), value = c(1, 0, 0, 0, 1, 1), type = c(\"Test\", \"Ref\", \"Test\", \"Ref\", \"Test\", \"Ref\") ) %>% diagTab( formula = type ~ value, bysort = \"id\", dimname = c(\"Test\", \"Ref\"), levels = c(1, 0) ) dummy #> Contingency Table: #> #> levels: 1 0 #> Ref #> Test 1 0 #> 1 1 1 #> 0 0 1"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"with-referencegold-standard","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Evaluation of Diagnostic Accuracy","what":"With Reference/Gold Standard","title":"Introduction to mcradds","text":"Next step utilize getAccuracy method calculate diagnostic accuracy. reference assay gold standard, argument ref r means ‘reference’. output present several indicators, sensitivity (sens), specificity (spec), positive/negative predictive value (ppv/npv) positive/negative likelihood ratio (plr/nlr). details can found ?getAccuracy.","code":"# Default method is Wilson score, and digit is 4. 
tb %>% getAccuracy(ref = \"r\") #> EST LowerCI UpperCI #> sens 0.8841 0.8200 0.9274 #> spec 0.8710 0.7655 0.9331 #> ppv 0.9385 0.8833 0.9685 #> npv 0.7714 0.6605 0.8541 #> plr 6.8514 3.5785 13.1181 #> nlr 0.1331 0.0832 0.2131 # Alter the number of digit to 2. tb %>% getAccuracy(ref = \"r\", digit = 2) #> EST LowerCI UpperCI #> sens 0.88 0.82 0.93 #> spec 0.87 0.77 0.93 #> ppv 0.94 0.88 0.97 #> npv 0.77 0.66 0.85 #> plr 6.85 3.58 13.12 #> nlr 0.13 0.08 0.21 # Alter the number of digit to 2. tb %>% getAccuracy(ref = \"r\", r_ci = \"clopper-pearson\") #> EST LowerCI UpperCI #> sens 0.8841 0.8186 0.9323 #> spec 0.8710 0.7615 0.9426 #> ppv 0.9385 0.8823 0.9731 #> npv 0.7714 0.6555 0.8633 #> plr 6.8514 3.5785 13.1181 #> nlr 0.1331 0.0832 0.2131"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"without-referencegold-standard","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Evaluation of Diagnostic Accuracy","what":"Without Reference/Gold Standard","title":"Introduction to mcradds","text":"reference assay gold standard, example, comparative assay approved market sale, ref nr means ‘reference’. output present indicators, positive/negative percent agreement (ppa/npa) overall percent agreement (opa).","code":"# When the reference is a comparative assay, not gold standard. tb %>% getAccuracy(ref = \"nr\", nr_ci = \"wilson\") #> EST LowerCI UpperCI #> ppa 0.8841 0.8200 0.9274 #> npa 0.8710 0.7655 0.9331 #> opa 0.8800 0.8277 0.9180"},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"estimating-regression-coefficient","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Regression coefficient and bias in medical decision level","what":"Estimating Regression coefficient","title":"Introduction to mcradds","text":"Regression agreement important criteria method comparison trials can achieved mcr package provided series regression methods, ‘Deming’, ‘Passing-Bablok’,’ weighted Deming’ . main key functions wrapped mcradds, mcreg, getCoefficients calcBias. like utilize entire functions mcr package, just adding specific package name front , like mcr::calcBias(), looks function called mcr package. Please noted mcr package available CRAN, mcreg mcreg2 function can used temporarily.","code":"# Deming regression fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, error.ratio = 1, method.reg = \"Deming\", method.ci = \"jackknife\" ) printSummary(fit) getCoefficients(fit)"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"estimating-bias-in-medical-decision-level","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Regression coefficient and bias in medical decision level","what":"Estimating Bias in Medical Decision Level","title":"Introduction to mcradds","text":"obtained regression equation, whether ‘Deming’ ‘Passing-Bablok’, can use estimate bias medical decision level. Suppose know medical decision level one assay 30, obviously make-number. can use fit object estimate bias using calcBias function. Please noted mcr package available CRAN, calcBias function can used temporarily.","code":"# absolute bias. calcBias(fit, x.levels = c(30)) # proportional bias. 
calcBias(fit, x.levels = c(30), type = \"proportional\")"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"bland-altman-analysis","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Bland-Altman Analysis","title":"Introduction to mcradds","text":"Bland-Altman analysis also agreement criteria method comparison trials. term authority’s request, normally present two categories: absolute difference relative difference, order evaluate agreements aspects. outputs descriptive statistics, including ‘mean’, ‘median’, ‘Q1’, ‘Q3’, ‘min’, ‘max’, ‘CI’ (confidence interval mean) ‘LoA’ (Limit Agreement). Please make sure difference type calculation, answer question define absolute relative difference. details information can found ?h_difference, five types available option. Default absolute difference derived Y-X, relative difference (Y-X)/(0.5*(X+Y)). Sometime think reference (X) gold standard good agreement test (Y), relative difference type can type2 = 4.","code":"# Default difference type blandAltman( x = platelet$Comparative, y = platelet$Candidate, type1 = 3, type2 = 5 ) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> type1 = 3, type2 = 5) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # Change relative different type to 4. blandAltman( x = platelet$Comparative, y = platelet$Candidate, type1 = 3, type2 = 4 ) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> type1 = 3, type2 = 4) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/X #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.078 ( 0.173) #> Median 6.350 0.056 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.125) #> Min, Max (-47.800, 42.100) (-0.341, 1.000) #> Limit of Agreement (-24.011, 38.671) (-0.261, 0.417) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.047, 0.109)"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"detecting-outliers","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Detecting Outliers","title":"Introduction to mcradds","text":"know, numerous statistical methodologies detect outliers. try show methods commonly used IVD trials different purposes. First foremost, quantitative data generate outliers, detecting process occurred quantitative trials. method comparison trials, detected outliers used sensitive analysis common. example, detect 5 outliers 200 subjects trial, conduct sensitive analysis without outliers interpret difference scenarios. two CLSI’s recommended approaches,4E ESD, wit latter one recommended recent version. mcradds package, can utilize getOutlier method detect outliers method argument define method ’d like, difference arguments difference type like ‘absolute’ ‘relative’ used. 
addition, mcradds also provides outlier methods evaluating Reference Range, ‘Tukey’ ‘Dixon’ wrapped refInterval() function.","code":"# ESD approach ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) out <- getOutlier(ba, method = \"ESD\", difference = \"rel\") out$stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE out$outmat #> sid x y #> 1 1 1.5 3.0 #> 2 2 4.0 6.9 #> 3 4 10.2 18.5 #> 4 10 16.4 10.8 # 4E approach ba2 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) out2 <- getOutlier(ba2, method = \"4E\") #> No outlier is detected. out2$stat #> NULL out2$outmat #> NULL"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"hypothesis-of-pearson-and-spearman","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Hypothesis of Pearson and Spearman","title":"Introduction to mcradds","text":"correlation coefficient Pearson helpful criteria assessing agreement test reference assays. compute coefficient P value R, cor.test() function commonly used. However P value relies hypothesis H0=0, doesn’t meet requirement authority. required provide P value H0=0.7 sometimes. Thus case, suggest use pearsonTest() function instead, hypothesis based Fisher’s Z transformation correlation. Since cor.test() function can provide confidence interval special hypothesis Spearman, spearmanTest() function recommended. function computes CI using bootstrap method, hypothesis based Fisher’s Z transformation correlation, variance proposed Bonett Wright (2000), Pearson’s.","code":"x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) pearsonTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.5711816 -0.1497426 0.8955795 0.2448722 0.4032777 #> #> $method #> [1] \"Pearson's correlation\" #> #> $conf.level #> [1] 0.95 x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) spearmanTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.6000000 -0.1431650 0.9640455 0.3243526 0.3728355 #> #> $method #> [1] \"Spearman's correlation\" #> #> $conf.level #> [1] 0.95"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"establishing-reference-rangeinterval","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Establishing Reference Range/Interval","title":"Introduction to mcradds","text":"refInterval function provides two outlier methods Tukey Dixon, three methods mentioned CLSI establish reference interval (RI). first parametric method follows normal distribution compute confidence interval. second one nonparametric method computes 2.5th 97.5th percentile range reference interval 95%. third one robust method, slightly complicated involves iterative procedure based formulas EP28A3. observations weighted according distance central tendency initially estimated median MAD(median absolute deviation). first two methods also accepted NMPA guideline, robust method recommended NMPA want establish reference interval assay, must collect least 120 samples China. 
number less 120, can ensure accuracy results. CLSI working group hesitant recommend method well, except extreme instances. default, confidence interval (CI) presented depending RI method utilized. RI method parametric, CI method parametric well. RI method nonparametric sample size 120 observations, nonparametric CI suggested. Otherwise sample size 120, boot method CI better choice. need aware nonparametric method CI allows refLevel = 0.95 confLevel = 0.9 arguments, boot methods CI used automatically. RI method robust method, method CI must boot. like compute 90% reference interval rather 90%, just alter refLevel = 0.9. confidence interval similar altered confLevel = 0.95 like compute 95% confidence interval limit reference interval.","code":"refInterval(x = calcium$Value, RI_method = \"parametric\", CI_method = \"parametric\") #> #> Reference Interval Method: parametric, Confidence Interval Method: parametric #> #> Call: refInterval(x = calcium$Value, RI_method = \"parametric\", CI_method = \"parametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.05, 10.32 #> RefLower Confidence Interval: 8.9926, 9.1100 #> Refupper Confidence Interval: 10.2584, 10.3757 refInterval(x = calcium$Value, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> Reference Interval Method: nonparametric, Confidence Interval Method: nonparametric #> #> Call: refInterval(x = calcium$Value, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.10, 10.30 #> RefLower Confidence Interval: 8.9000, 9.2000 #> Refupper Confidence Interval: 10.3000, 10.4000 refInterval(x = calcium$Value, RI_method = \"robust\", CI_method = \"boot\") #> [1] \"Bootstrape process could take a short while.\" #> #> Reference Interval Method: robust, Confidence Interval Method: boot #> #> Call: refInterval(x = calcium$Value, RI_method = \"robust\", CI_method = \"boot\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.04, 10.32 #> RefLower Confidence Interval: 8.9792, 9.0973 #> Refupper Confidence Interval: 10.2589, 10.3748"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"paired-auc-test","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Paired AUC Test","title":"Introduction to mcradds","text":"aucTest function compares two AUC paired two-sample diagnostic assays using standardized difference method, small difference SE computation compared unpaired design. samples paired design considered independent, SE can computed directly Delong’s method pROC package. order evaluate two paired assays, aucTest function three assessment methods including ‘difference’, ‘non-inferiority’ ‘superiority’, shown Liu(2006)’s article . Jen-Pei Liu (2006) “Tests equivalence non-inferiority diagnostic accuracy based paired areas ROC curves”. Statist. Med., 25:1219–1238. DOI: 10.1002/sim.2358. Suppose want compare paired AUC OxLDL LDL assays ldlroc data set, null hypothesis difference AUC area. Suppose want see OxLDL assay superior LDL assay margin equal 0.1. case null hypothesis difference less 0.1. Suppose want see OxLDL assay non-inferior LDL assay margin equal -0.1. 
case null hypothesis difference less -0.1.","code":"# H0 : Difference between areas = 0: aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing difference based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is difference to 0 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 3.0088 #> Pvalue: 0.002623 # H0 : Superiority margin <= 0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"superiority\", h0 = 0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing superiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is superiority to 0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 1.7436 #> Pvalue: 0.04061 # H0 : Non-inferiority margin <= -0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"non-inferiority\", h0 = -0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing non-inferiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is non-inferiority to -0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 4.2739 #> Pvalue: 9.606e-06"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"reproducibility-analysis-reader-precision","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Reproducibility Analysis (Reader Precision)","title":"Introduction to mcradds","text":"PDL1 assay trials, must estimate reader precision different readers reads sites, using APA, ANA OPA primary endpoint. getAccuracy function can implement computations reader precision trials belong qualitative trials. distinction trial, comparative assay, just stained specimen scored different pathologists (readers). can determine one can reference, instead compare comparison. PDL1RP example data, 150 specimens stained one PD-L1 assay three different sites, 50 specimens . PDL1RP$wtn_reader sub-data, 3 readers selected three different sites responsible scoring 50 specimens . Thus might want evaluate reproducibility within three readers three site. 
PDL1RP$wtn_reader sub-data, one reader selected three different sites responsible scoring 50 specimens 3 times minimum 2 weeks reads means process score. Thus might want evaluate reproducibility within three reads specimens. PDL1RP$btw_site sub-data, one reader selected three different sites responsible scoring 150 specimens , collected three sites. Thus might want evaluate reproducibility within three site.","code":"reader <- PDL1RP$btw_reader tb1 <- reader %>% diagTab( formula = Reader ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Site\" ) getAccuracy(tb1, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9479 0.9260 0.9686 #> ana 0.9540 0.9342 0.9730 #> opa 0.9511 0.9311 0.9711 read <- PDL1RP$wtn_reader tb2 <- read %>% diagTab( formula = Order ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Sample\" ) getAccuracy(tb2, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9442 0.9204 0.9657 #> ana 0.9489 0.9273 0.9681 #> opa 0.9467 0.9244 0.9667 site <- PDL1RP$btw_site tb3 <- site %>% diagTab( formula = Site ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Sample\" ) getAccuracy(tb2, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9442 0.9204 0.9657 #> ana 0.9489 0.9273 0.9681 #> opa 0.9467 0.9244 0.9667"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"precision-evaluation","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Precision Evaluation","title":"Introduction to mcradds","text":"precision evaluation commonly used IVD trials, necessary include process end-users laboratories’ QC procedure verifying repeatability within-laboratory precision. wrapped main key functions Roche’s VCA, well mcr package. ’s recommended read details ?anovaVCA ?VCAinference functions CLSI-EP05 help understanding outputs, CV%.","code":"fit <- anovaVCA(value ~ day / run, glucose) VCAinference(fit) #> #> #> #> Inference from (V)ariance (C)omponent (A)nalysis #> ------------------------------------------------ #> #> > VCA Result: #> ------------- #> #> Name DF SS MS VC %Total SD CV[%] #> 1 total 64.7773 12.9336 100 3.5963 1.4727 #> 2 day 19 415.8 21.8842 1.9586 15.1432 1.3995 0.5731 #> 3 day:run 20 281 14.05 3.075 23.7754 1.7536 0.7181 #> 4 error 40 316 7.9 7.9 61.0814 2.8107 1.151 #> #> Mean: 244.2 (N = 80) #> #> Experimental Design: balanced | Method: ANOVA #> #> #> > VC: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 12.9336 9.4224 18.8614 9.9071 17.7278 #> day 1.9586 #> day:run 3.0750 #> error 7.9000 5.3251 12.9333 5.6673 11.9203 #> #> > SD: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 3.5963 3.0696 4.3430 3.1476 4.2104 #> day 1.3995 #> day:run 1.7536 #> error 2.8107 2.3076 3.5963 2.3806 3.4526 #> #> > CV[%]: #> -------- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 1.4727 1.257 1.7785 1.2889 1.7242 #> day 0.5731 #> day:run 0.7181 #> error 1.1510 0.945 1.4727 0.9749 1.4138 #> #> #> 95% Confidence Level #> SAS PROC MIXED method used for computing CIs"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"common-visualizations","dir":"Articles","previous_headings":"","what":"Common Visualizations","title":"Introduction to mcradds","text":"term visualizations IVD trials, two common plots presented clinical reports, Bland-Altman plot Regression plot. 
don’t use two different functions draw plots, included autoplot() function. plots can obtained just call autoplot() object.","code":""},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"bland-altman-plot","dir":"Articles","previous_headings":"Common Visualizations","what":"Bland-Altman plot","title":"Introduction to mcradds","text":"generate Bland-Altman plot, create object blandAltman() function call autoplot straightforward can choose Bland-Altman type require, ‘absolute’ ‘relative’. Add drawing arguments like adjust format. detailed arguments can found ?autoplot.","code":"object <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) # Absolute difference plot autoplot(object, type = \"absolute\") # Relative difference plot autoplot(object, type = \"relative\") autoplot( object, type = \"absolute\", jitter = TRUE, fill = \"lightblue\", color = \"grey\", size = 2, ref.line.params = list(col = \"grey\"), loa.line.params = list(col = \"grey\"), label.digits = 2, label.params = list(col = \"grey\", size = 3, fontface = \"italic\"), x.nbreak = 6, main.title = \"Bland-Altman Plot\", x.title = \"Mean of Test and Reference Methods\", y.title = \"Reference - Test\" )"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"regression-plot","dir":"Articles","previous_headings":"Common Visualizations","what":"Regression plot","title":"Introduction to mcradds","text":"generate regression plot, create object mcreg() function call autoplot straightforward. Please noted mcr package available CRAN, mcreg mcreg2 function can used temporarily. arguments can used shown .","code":"fit <- mcreg2( x = platelet$Comparative, y = platelet$Candidate, method.reg = \"PaBa\", method.ci = \"bootstrap\" ) autoplot(fit) autoplot( fit, identity.params = list(col = \"blue\", linetype = \"solid\"), reg.params = list(col = \"red\", linetype = \"solid\"), equal.axis = TRUE, legend.title = FALSE, legend.digits = 3, x.title = \"Reference\", y.title = \"Test\" )"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"summary","dir":"Articles","previous_headings":"","what":"Summary","title":"Introduction to mcradds","text":"summary, mcradds contains multiple functions methods internal statistical analyses QC procedure IVD trials. design package aims expand analysis scope mcr package , give users lot flexibility meeting analysis needs. Given package validated GCP process, ’s recommended use regulatory submissions. 
However can give assist supplementary analysis needs regulatory.","code":""},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"session-info","dir":"Articles","previous_headings":"","what":"Session Info","title":"Introduction to mcradds","text":"output sessionInfo() system.","code":"#> R version 4.3.1 (2023-06-16) #> Platform: x86_64-pc-linux-gnu (64-bit) #> Running under: Ubuntu 22.04.3 LTS #> #> Matrix products: default #> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 #> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so; LAPACK version 3.10.0 #> #> locale: #> [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 #> [4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 #> [7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C #> [10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C #> #> time zone: UTC #> tzcode source: system (glibc) #> #> attached base packages: #> [1] stats graphics grDevices datasets utils methods base #> #> other attached packages: #> [1] mcradds_1.0.1 #> #> loaded via a namespace (and not attached): #> [1] gld_2.6.6 gtable_0.3.4 xfun_0.40 #> [4] bslib_0.5.1 ggplot2_3.4.3 lattice_0.21-8 #> [7] numDeriv_2016.8-1.1 vctrs_0.6.3 tools_4.3.1 #> [10] generics_0.1.3 tibble_3.2.1 proxy_0.4-27 #> [13] fansi_1.0.5 pkgconfig_2.0.3 Matrix_1.5-4.1 #> [16] data.table_1.14.8 checkmate_2.2.0 desc_1.4.2 #> [19] readxl_1.4.3 lifecycle_1.0.3 rootSolve_1.8.2.4 #> [22] farver_2.1.1 compiler_4.3.1 stringr_1.5.0 #> [25] textshaping_0.3.7 Exact_3.2 munsell_0.5.0 #> [28] htmltools_0.5.6.1 DescTools_0.99.50 class_7.3-22 #> [31] sass_0.4.7 yaml_2.3.7 nloptr_2.0.3 #> [34] pillar_1.9.0 pkgdown_2.0.7 jquerylib_0.1.4 #> [37] MASS_7.3-60 cachem_1.0.8 boot_1.3-28.1 #> [40] nlme_3.1-162 tidyselect_1.2.0 digest_0.6.33 #> [43] mvtnorm_1.2-3 stringi_1.7.12 dplyr_1.1.3 #> [46] purrr_1.0.2 labeling_0.4.3 splines_4.3.1 #> [49] rprojroot_2.0.3 fastmap_1.1.1 grid_4.3.1 #> [52] colorspace_2.1-0 lmom_3.0 expm_0.999-7 #> [55] cli_3.6.1 magrittr_2.0.3 utf8_1.2.3 #> [58] VCA_1.4.5 e1071_1.7-13 withr_2.5.1 #> [61] scales_1.2.1 backports_1.4.1 rmarkdown_2.25 #> [64] httr_1.4.7 lme4_1.1-34 cellranger_1.1.0 #> [67] ragg_1.2.6 memoise_2.0.1 evaluate_0.22 #> [70] knitr_1.44 rlang_1.1.1 Rcpp_1.0.11 #> [73] glue_1.6.2 renv_0.15.5 pROC_1.18.4 #> [76] minqa_1.2.6 rstudioapi_0.15.0 jsonlite_1.8.7 #> [79] plyr_1.8.9 R6_2.5.1 systemfonts_1.0.5 #> [82] fs_1.6.3"},{"path":"https://kaigu1990.github.io/mcradds/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Kai Gu. Author, maintainer, copyright holder.","code":""},{"path":"https://kaigu1990.github.io/mcradds/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Gu K (2023). mcradds: Processing Analyzing IVD Trials. https://github.com/kaigu1990/mcradds, https://kaigu1990.github.io/mcradds/.","code":"@Manual{, title = {mcradds: Processing and Analyzing of IVD Trials}, author = {Kai Gu}, year = {2023}, note = {https://github.com/kaigu1990/mcradds, https://kaigu1990.github.io/mcradds/}, }"},{"path":"https://kaigu1990.github.io/mcradds/index.html","id":"mcradds-","dir":"","previous_headings":"","what":"Processing and Analyzing of IVD Trials","title":"Processing and Analyzing of IVD Trials","text":"mcradds R package complement mcr package, contains common solid functions designing, analyzing visualization Vitro Diagnostic trials. methods algorithms refer CLSI recommendations NMPA guidelines. 
package provides series typical functionality, shown : Estimation sample size trials, NMPA guideline. Diagnostic accuracy /without standard/golden reference, CLSI EP12-A2. Regression analysis plot method comparison, CLSI EP09-A3. Bland-Altman analysis plot method comparison, CLSI EP09-A3. Outlier detection 4E method CLSI EP09-A2 ESD CLSI EP09-A3. Evaluation bias medical decision level, CLSI EP09-A3. Pearson Spearman correlation adding hypothesis test confidence interval. Establishing Reference Range/Interval, CLSI EP28-A3 NMPA guideline. Paired ROC/AUC test superiority non-inferiority trials, CLSI EP05-A3/EP15-A3. Reproducibility analysis (reader precision) immunohistochemical assays, CLSI /LA28-A2 NMPA guideline. Evaluation precision quantitative measurements, CLSI EP05-A3.","code":""},{"path":"https://kaigu1990.github.io/mcradds/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Processing and Analyzing of IVD Trials","text":"mcradds available CRAN can install latest released version : can install development version directly GitHub : See package vignettes browseVignettes(package = \"mcradds\") usage package.","code":"install.packages(\"mcradds\") if (!require(\"devtools\")) { install.packages(\"devtools\") } devtools::install_github(\"kaigu1990/mcradds\")"},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":null,"dir":"Reference","previous_headings":"","what":"BAsummary Class — BAsummary-class","title":"BAsummary Class — BAsummary-class","text":"BAsummary class used display BlandAltman analysis outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"BAsummary Class — BAsummary-class","text":"","code":"BAsummary(call, data, stat, param)"},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"BAsummary Class — BAsummary-class","text":"call (call) function call. data (data.frame) stores raw data input. stat (list) contains several statistics numeric data. 
param (list) list relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"BAsummary Class — BAsummary-class","text":"object class BAsummary.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"BAsummary Class — BAsummary-class","text":"call call data data outlier outlier param param","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":null,"dir":"Reference","previous_headings":"","what":"EDS Test for Outliers — ESD_test","title":"EDS Test for Outliers — ESD_test","text":"Perform Rosner's generalized extreme Studentized deviate (ESD) test, assumes distribution normal (Gaussian), can used number outliers unknown, becomes robust number samples increases.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"EDS Test for Outliers — ESD_test","text":"","code":"ESD_test(x, alpha = 0.05, h = 5)"},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"EDS Test for Outliers — ESD_test","text":"x (numeric) vector observations can difference Bland-Altman analysis. Normally relative difference preferred IVD trials. Missing(NA) allowed removed. must least 10 available observations x. alpha (numeric) type--risk, \\(\\alpha\\). h (integer) positive integer indicating number suspected outliers. argument h must 1 n-2 n denotes number available values x. default value h = 5.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"EDS Test for Outliers — ESD_test","text":"list class containing results ESD test. stat data frame contains several statistics ESD test includes index(), Mean, SD, raw data(x), location(Obs) x, ESD statistics(ESDi), Lambda Outliers(TRUE FALSE). ord vector order index outliers equal Obs stat data frame.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"EDS Test for Outliers — ESD_test","text":"algorithm determining number outliers follows: Compare ESDi Lambda. ESDi > Lambda observations regards outliers. order index corresponds available x data removed missing (NA) value. compare ESD(h) ESD(h+1) equal, h+1 ESD values shown. identical, can regarded outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"EDS Test for Outliers — ESD_test","text":"CLSI EP09A3 Appendix B. 
Detecting Aberrant Results (Outliers).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"EDS Test for Outliers — ESD_test","text":"","code":"data(\"platelet\") res <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) ESD_test(x = res@stat$relative_diff) #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE #> #> $ord #> [1] 1 4 2 10 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":null,"dir":"Reference","previous_headings":"","what":"Method Comparison Regression Class — MCR-class","title":"Method Comparison Regression Class — MCR-class","text":"MCR class serves simplified version MCResult mcr package. mcr package available CRAN, class took temporary replacement , contains necessaries autoplot.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Method Comparison Regression Class — MCR-class","text":"","code":"MCR(data, coef, mnames, regmeth)"},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Method Comparison Regression Class — MCR-class","text":"data (data.frame) original data. coef (numeric) numeric vector contains slope intercept. mnames (character) name X Y assays, default 'Method1' 'Method2' defined mcreg function. regmeth (character) name regression.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Method Comparison Regression Class — MCR-class","text":"object class MCR.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"Method Comparison Regression Class — MCR-class","text":"data data coef coef mnames mnames regmeth regmeth","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":null,"dir":"Reference","previous_headings":"","what":"MCTab Class — MCTab-class","title":"MCTab Class — MCTab-class","text":"MCTab class serves store 2x2 contingency table","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"MCTab Class — MCTab-class","text":"","code":"MCTab(data, tab, levels)"},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"MCTab Class — MCTab-class","text":"data (data.frame) original data set. tab (table)table class converted table() display 2x2 contingency table. 
levels (character) levels measurements.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"MCTab Class — MCTab-class","text":"object class MCTab.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"MCTab Class — MCTab-class","text":"data data tab candidate levels levels","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/PDL1RP.html","id":null,"dir":"Reference","previous_headings":"","what":"PD-L1 Reader Precision Data — PDL1RP","title":"PD-L1 Reader Precision Data — PDL1RP","text":"dummy data set PD-L1 stained study estimate reproducibility one assay determining PD-L1 status NSCLC tissue specimens. contains three sub-data compute reproducibility within reader (one pathologists, also called reader , scores one specimen three times), reader (three readers scores specimen) site (one reader three sites scores specimens). data sets reference score can used pairwise comparison calculate APA, ANA OPA reply reference.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/PDL1RP.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"PD-L1 Reader Precision Data — PDL1RP","text":"","code":"PDL1RP"},{"path":"https://kaigu1990.github.io/mcradds/reference/PDL1RP.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"PD-L1 Reader Precision Data — PDL1RP","text":"PDL1RP data set contains 3 sub set, sub set includes 150 specimens, 450 observations 4 variables. Sample Sample id Site Site id Order Order reader scoring Reader Reader id, first character represents site id, second character reader number Value Result scoring, Positive Negative","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":null,"dir":"Reference","previous_headings":"","what":"Reference Interval Class — RefInt-class","title":"Reference Interval Class — RefInt-class","text":"RefInt class serves store results reference Interval calculation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reference Interval Class — RefInt-class","text":"","code":"RefInt(call, method, n, data, outlier, refInt, confInt)"},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reference Interval Class — RefInt-class","text":"call (call) function call. method (character) method names reference interval confidence interval. n (numeric) number available samples. data (numeric) numeric raw measurements, outlier removed. outlier (list) list outliers contains index number outliers, data without outliers. refInt (numeric) number reference interval. 
confInt (list) list confidence interval lower upper reference limit.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reference Interval Class — RefInt-class","text":"object class RefInt.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"Reference Interval Class — RefInt-class","text":"call call method method n n data data outlier outlier refInt refInt confInt confInt","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":null,"dir":"Reference","previous_headings":"","what":"Inferential Statistics for VCA-Results — VCAinference","title":"Inferential Statistics for VCA-Results — VCAinference","text":"copy VCA::VCAinference VCA package","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Inferential Statistics for VCA-Results — VCAinference","text":"","code":"VCAinference(...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Inferential Statistics for VCA-Results — VCAinference","text":"... Arguments passed VCA::VCAinference obj (object) class 'VCA' , alternatively, list 'VCA' objects, argument can specified vectors, -th vector element applies -th element 'obj' (see examples) alpha (numeric) value specifying significance level \\(100*(1-alpha)\\)% confidence intervals. total.claim (numeric) value specifying claim-value Chi-Squared test total variance (SD CV, see claim.type). error.claim (numeric) value specifying claim-value Chi-Squared test error variance (SD CV, see claim.type). claim.type (character) one \"VC\", \"SD\", \"CV\" specifying claim-values interpreted: \"VC\" (Default) = claim-value(s) specified terms variance(s), \"SD\" = claim-values specified terms standard deviations (SD), \"CV\" = claim-values specified terms coefficient(s) variation (CV) specified percentages. set \"SD\" \"CV\", claim-values converted variances applying Chi-Squared test (see examples). VarVC (logical) TRUE = element \"Matrices\" exists (see anovaVCA), covariance matrix estimated VCs computed (see vcovVC, used CIs intermediate VCs 'method.ci=\"sas\"'. Note, might take long larger datasets, since many matrix operations involved. FALSE (Default) = computing covariance matrix VCs omitted, well CIs intermediate VCs. excludeNeg (logical) TRUE = confidence intervals negative variance estimates reported. FALSE = confidence intervals VCs reported including negative VCs. See details section thorough explanation. constrainCI (logical) TRUE = CI-limits variance components constrained >= 0. FALSE = unconstrained CIs potentially negative CI-limits reported. preserve original width CIs. See details section thorough explanation. ci.method (character) string abbreviation specifying approach use computing confidence intervals variance components (VC). \"sas\" (default) uses Chi-Squared based CIs total error normal approximation VCs (Wald-limits, option \"NOBOUND\" SAS PROC MIXED); \"satterthwaite\" approximate DFs VC using Satterthwaite approach (see SattDF models fitted ANOVA) Cis based Chi-Squared distribution. approach conservative avoids negative values lower bounds. 
quiet (logical) TRUE = suppress warning, issued otherwise","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Inferential Statistics for VCA-Results — VCAinference","text":"object VCAinference contains series statistics.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Inferential Statistics for VCA-Results — VCAinference","text":"","code":"data(glucose) fit <- anovaVCA(value ~ day / run, glucose) VCAinference(fit) #> #> #> #> Inference from (V)ariance (C)omponent (A)nalysis #> ------------------------------------------------ #> #> > VCA Result: #> ------------- #> #> Name DF SS MS VC %Total SD CV[%] #> 1 total 64.7773 12.9336 100 3.5963 1.4727 #> 2 day 19 415.8 21.8842 1.9586 15.1432 1.3995 0.5731 #> 3 day:run 20 281 14.05 3.075 23.7754 1.7536 0.7181 #> 4 error 40 316 7.9 7.9 61.0814 2.8107 1.151 #> #> Mean: 244.2 (N = 80) #> #> Experimental Design: balanced | Method: ANOVA #> #> #> > VC: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 12.9336 9.4224 18.8614 9.9071 17.7278 #> day 1.9586 #> day:run 3.0750 #> error 7.9000 5.3251 12.9333 5.6673 11.9203 #> #> > SD: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 3.5963 3.0696 4.3430 3.1476 4.2104 #> day 1.3995 #> day:run 1.7536 #> error 2.8107 2.3076 3.5963 2.3806 3.4526 #> #> > CV[%]: #> -------- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 1.4727 1.257 1.7785 1.2889 1.7242 #> day 0.5731 #> day:run 0.7181 #> error 1.1510 0.945 1.4727 0.9749 1.4138 #> #> #> 95% Confidence Level #> SAS PROC MIXED method used for computing CIs #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":null,"dir":"Reference","previous_headings":"","what":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"copy VCA::anovaVCA VCA package","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"","code":"anovaVCA(...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"... Arguments passed VCA::anovaVCA form (formula) specifying model fit, response variable left '~' mandatory Data (data.frame) containing variables referenced 'form' (factor, character) variable specifying groups analysis performed individually, .e. -processing NegVC (logical) FALSE = negative variance component estimates (VC) set 0 contribute total variance (done SAS PROC NESTED, conservative estimate total variance). original ANOVA estimates can found element 'VCoriginal'. degrees freedom total variance based adapted mean squares (MS), .e. adapted MS computed \\(D * VC\\), VC column vector negative VCs set 0. TRUE = negative variance component estimates set 0 contribute total variance (original definition total variance). VarVC.method (character) string specifying whether use algorithm given Searle et al. 
(1992) corresponds VarVC.method=\"scm\" Giesbrecht Burns (1985) can specified via \"gb\". Method \"scm\" (Searle, Casella, McCulloch) exact algorithm, \"gb\" (Giesbrecht, Burns) termed \"rough approximation\" authors, sufficiently exact compared e.g. SAS PROC MIXED (method=type1) uses inverse Fisher-Information matrix approximation. balanced designs methods give identical results, unbalanced designs differences occur. MME (logical) TRUE = (M)ixed (M)odel (E)quations solved, .e. 'VCA' object additional elements \"RandomEffects\", \"FixedEffects\", \"VarFixed\" (variance-covariance matrix fixed effects) \"Matrices\" element addional elements corresponding intermediate results solving MMEs. FALSE = solve MMEs, reduces computation time complex models significantly. quiet (logical) TRUE = suppress warning, issued otherwise order.data (logical) TRUE = class-variables ordered increasingly, FALSE = ordering class-variables remain ","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"class VCA downstream analysis.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"","code":"data(glucose) anovaVCA(value ~ day / run, glucose) #> #> #> Result Variance Component Analysis: #> ----------------------------------- #> #> Name DF SS MS VC %Total SD CV[%] #> 1 total 64.77732 12.933553 100 3.596325 1.472697 #> 2 day 19 415.8 21.884211 1.958553 15.143191 1.399483 0.573089 #> 3 day:run 20 281 14.05 3.075 23.77537 1.753568 0.718087 #> 4 error 40 316 7.9 7.9 61.081439 2.810694 1.15098 #> #> Mean: 244.2 (N = 80) #> #> Experimental Design: balanced | Method: ANOVA #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":null,"dir":"Reference","previous_headings":"","what":"AUC Test for Paired Two-sample Measurements — aucTest","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"function compares two AUC paired two-sample diagnostic assays standardized difference method, little difference SE calculation unpaired design. order compare two assays, function provides three assessments including 'difference', 'non-inferiority' 'superiority'. method comparing referred Liu(2006)'s article can found reference section .","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"","code":"aucTest( x, y, response, h0 = 0, conf.level = 0.95, method = c(\"difference\", \"non-inferiority\", \"superiority\"), ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"x (numeric) reference/standard diagnostic assay. y (numeric) test diagnostic assay. response (numeric factor) vector responses represent type classes, typically encoded 0(controls) 1(cases). h0 (numeric) specified hypothesized value margin two assays, default 0 difference method. select non-inferiority method, h0 negative value. select superiority method, non-negative value. 
conf.level (numeric) significance level 0 1 (non-inclusive) returned confidence interval. method (string) string specifying type hypothesis test, must one \"difference\" (default), \"non-inferiority\" \"superiority\". ... arguments passed pROC::roc().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"RefInt object contains relevant results comparing paired ROC two-sample assays.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"samples considered independent, paired design, SE can computed method Delong provided pROC package. aucTest function use standardized difference approach Liu(2006) publication compute SE corresponding hypothesis test statistic paired design study. difference test difference two diagnostic tests, default h0 zero. non-inferiority test new diagnostic tests worse standard diagnostic test specific margin, time maybe safer, easier administer cost less. superiority test test new diagnostic tests better standard diagnostic test specific margin(default zero), better efficacy.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"test significance difference equal result EP24A2 Appendix D. Table D2. Table D2 uses method Hanley & McNeil (1982), whereas function uses method DeLong et al. (1988), results difference SE. Thus corresponding Z statistic P value equal well.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"Jen-Pei Liu (2006) \"Tests equivalence non-inferiority diagnostic accuracy based paired areas ROC curves\". Statist. Med. , 25:1219–1238. 
DOI: 10.1002/sim.2358.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"","code":"data(\"ldlroc\") # H0 : Difference between areas = 0: aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing difference based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is difference to 0 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 3.0088 #> Pvalue: 0.002623 # H0 : Superiority margin <= 0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"superiority\", h0 = 0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing superiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is superiority to 0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 1.7436 #> Pvalue: 0.04061 # H0 : Non-inferiority margin <= -0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"non-inferiority\", h0 = -0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing non-inferiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is non-inferiority to -0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 4.2739 #> Pvalue: 9.606e-06"},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"Draw ggplot-based difference Bland-Altman plot reference assay vs. test assay BAsummary object, regression plot MCResult. 
Also Providing necessary useful option arguments presentation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"","code":"autoplot(object, ...) # S4 method for BAsummary autoplot( object, type = c(\"absolute\", \"relative\"), color = \"black\", fill = \"lightgray\", size = 1.5, shape = 21, jitter = FALSE, ref.line = TRUE, ref.line.params = list(col = \"blue\", linetype = \"solid\", size = 1), ci.line = FALSE, ci.line.params = list(col = \"blue\", linetype = \"dashed\"), loa.line = TRUE, loa.line.params = list(col = \"blue\", linetype = \"dashed\"), label = TRUE, label.digits = 4, label.params = list(col = \"black\", size = 4), x.nbreak = NULL, y.nbreak = NULL, x.title = NULL, y.title = NULL, main.title = NULL ) # S4 method for MCR autoplot( object, color = \"black\", fill = \"lightgray\", size = 1.5, shape = 21, jitter = FALSE, identity = TRUE, identity.params = list(col = \"gray\", linetype = \"dashed\"), reg = TRUE, reg.params = list(col = \"blue\", linetype = \"solid\"), equal.axis = FALSE, legend.title = TRUE, legend.digits = 2, x.nbreak = NULL, y.nbreak = NULL, x.title = NULL, y.title = NULL, main.title = NULL )"},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"object (BAsummary, MCResult) input, depending function done, blandAltman() mcreg(). ... used. type (string) difference type input, default 'absolute'. color, fill (string) point colors. size (numeric) size points. shape (integer) ggplot shape points. jitter (logical) whether add small amount random variation location points. ref.line (logical) whether plot 'mean' line, default TRUE. ref.line.params, ci.line.params, loa.line.params (list) parameters (color, linetype, linewidth) argument 'ref.line', 'ci.line' 'loa.line'; eg. ref.line.params = list(col = \"blue\", linetype = \"solid\", linewidth = 1). ci.line (logical) whether plot confidence interval line 'mean', default FALSE. loa.line (logical) whether plot limit agreement line, default TRUE. label (logical) whether add specific value label line (ref.line, ci.line loa.line). shown line defined TRUE. label.digits (integer) number digits decimal point label. label.params (list) parameters (color, size, fontface) argument 'label'. x.nbreak, y.nbreak (integer) integer guiding number major breaks x-axis y-axis. x.title, y.title, main.title (string) x-axis, y-axis main title plot. identity (logical) whether add identity line, default TRUE. identity.params, reg.params (list) parameters (color, linetype) argument 'identity' 'reg'; eg. identity.params = list(col = \"gray\", linetype = \"dashed\"). reg (logical) whether add regression line slope intercept obtained mcr::mcreg() function, default TRUE. equal.axis (logical) whether adjust ranges x-axis y-axis identical. equal.axis = TRUE, x-axis equal y-axis. legend.title (logical) whether present title legend. 
legend.digits (integer) number digits decimal point legend.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"ggplot based Bland-Altman plot regression plot can easily customized using additional ggplot functions.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"like alter part autoplot function provided, adding ggplot statements suggested.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"","code":"# Specify the type for difference plot data(\"platelet\") object <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) autoplot(object) autoplot(object, type = \"relative\") # Set the addition parameters for `geom_point` autoplot(object, type = \"relative\", jitter = TRUE, fill = \"lightblue\", color = \"grey\", size = 2 ) # Set the color and line type for reference and limits of agreement lines autoplot(object, type = \"relative\", ref.line.params = list(col = \"red\", linetype = \"solid\"), loa.line.params = list(col = \"grey\", linetype = \"solid\") ) # Set label color, size and digits autoplot(object, type = \"absolute\", ref.line.params = list(col = \"grey\"), loa.line.params = list(col = \"grey\"), label.digits = 2, label.params = list(col = \"grey\", size = 3, fontface = \"italic\") ) # Add main title, X and Y axis titles, and adjust X ticks. autoplot(object, type = \"absolute\", x.nbreak = 6, main.title = \"Bland-Altman Plot\", x.title = \"Mean of Test and Reference Methods\", y.title = \"Reference - Test\" ) if (FALSE) { # Using the default arguments for regression plot data(\"platelet\") fit <- mcreg2( x = platelet$Comparative, y = platelet$Candidate, method.reg = \"Deming\", method.ci = \"jackknife\" ) autoplot(fit) # Only present the regression line and alter the color and shape. autoplot(fit, identity = FALSE, reg.params = list(col = \"grey\", linetype = \"dashed\"), legend.title = FALSE, legend.digits = 4 ) }"},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate Statistics for Bland-Altman — blandAltman","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"Calculate Bland-Altman related statistics specific difference type, difference, limited agreement confidence interval. outlier detecting function graphic function get difference result .","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"","code":"blandAltman(x, y, sid = NULL, type1 = 3, type2 = 5, conf.level = 0.95)"},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"x (numeric) reference method. y (numeric) test method. sid (numeric string) sample id. 
type1 (integer) specifying specific difference absolute difference, default 3. type2 (integer) specifying specific difference relative difference, default 5. conf.level (numeric) significance level two side, default 0.95.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"object BAsummary class contains BlandAltman analysis. data data frame contains raw data input. stat list contains summary table (tab) Bland-Altman analysis, vector (absolute_diff) absolute difference vector (relative_diff) relative difference.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"","code":"data(\"platelet\") blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # with sample id as input sid blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> sid = platelet$Sample) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # Specifiy the type for difference blandAltman(x = platelet$Comparative, y = platelet$Candidate, type1 = 1, type2 = 4) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> type1 = 1, type2 = 4) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/X #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.078 ( 0.173) #> Median 6.350 0.056 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.125) #> Min, Max (-47.800, 42.100) (-0.341, 1.000) #> Limit of Agreement (-24.011, 38.671) (-0.261, 0.417) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.047, 0.109)"},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":null,"dir":"Reference","previous_headings":"","what":"Reference Interval Data — calcium","title":"Reference Interval Data — calcium","text":"example calcium can used compute reference range Calcium 240 medical students sex.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reference Interval Data — calcium","text":"","code":"calcium"},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Reference Interval Data — calcium","text":"calcium data set contains 240 observations 3 variables. 
Sample Sample id Value Measurements target subjects Group Sex group target subjects","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Reference Interval Data — calcium","text":"CLSI-EP28A3 Table 4. cited data set.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":null,"dir":"Reference","previous_headings":"","what":"Concatenate and Print with Newline — cat_with_newline","title":"Concatenate and Print with Newline — cat_with_newline","text":"function concatenates inputs like cat() prints newline.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Concatenate and Print with Newline — cat_with_newline","text":"","code":"cat_with_newline(...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Concatenate and Print with Newline — cat_with_newline","text":"... inputs concatenate.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Concatenate and Print with Newline — cat_with_newline","text":"None, used side effect producing concatenated output R console.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Concatenate and Print with Newline — cat_with_newline","text":"","code":"cat_with_newline(\"hello\", \"world\") #> hello world"},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates Contingency Table — diagTab","title":"Creates Contingency Table — diagTab","text":"Creates 2x2 contingency table data frame matrix qualitative performance reader precision downstream analysis.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates Contingency Table — diagTab","text":"","code":"diagTab( formula = ~., data, bysort = NULL, dimname = NULL, levels = NULL, rep = FALSE, across = NULL )"},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates Contingency Table — diagTab","text":"formula (numeric) formula object cross-classifying variables (separated +) right hand side. data wide structure, row name contingency represented variable left + sign, col name right. data long structure, classified variable put left formula, value variable put right. data (data.frame matrix) data frame matrix. bysort (string) sorted variable col names data, grouped variable reproducibility analysis. dimname (vector) character vector define row name contingency table first variable, col name second variable. levels (vector) vector known levels measurements. rep (logical) whether implement reproducibility like reader precision . across (string) across variable split original data set subsets. 
-reader within-reader precision's across variable site commonly.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Creates Contingency Table — diagTab","text":"object matrix contains 2x2 contingency table.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Creates Contingency Table — diagTab","text":"attention like generate 2x2 contingency table reproducibility analysis, original data long structure using corresponding formula.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Creates Contingency Table — diagTab","text":"","code":"# For qualitative performance with wide data structure data(\"qualData\") qualData %>% diagTab(formula = ~ CandidateN + ComparativeN) #> Contingency Table: #> #> levels: 0 1 #> ComparativeN #> CandidateN 0 1 #> 0 54 16 #> 1 8 122 qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) #> Contingency Table: #> #> levels: 1 0 #> ComparativeN #> CandidateN 1 0 #> 1 122 8 #> 0 16 54 # For qualitative performance with long data structure dummy <- data.frame( id = c(\"1001\", \"1001\", \"1002\", \"1002\", \"1003\", \"1003\"), value = c(1, 0, 0, 0, 1, 1), type = c(\"Test\", \"Ref\", \"Test\", \"Ref\", \"Test\", \"Ref\") ) dummy %>% diagTab( formula = type ~ value, bysort = \"id\", dimname = c(\"Test\", \"Ref\"), levels = c(1, 0) ) #> Contingency Table: #> #> levels: 1 0 #> Ref #> Test 1 0 #> 1 1 1 #> 0 0 1 # For Between-Reader precision performance data(\"PDL1RP\") reader <- PDL1RP$btw_reader reader %>% diagTab( formula = Reader ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Site\" ) #> Contingency Table: #> #> levels: Positive Negative #> Pairwise2 #> Pairwise1 Positive Negative #> Positive 200 7 #> Negative 15 228"},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":null,"dir":"Reference","previous_headings":"","what":"Detect Dixon Outlier — dixon_outlier","title":"Detect Dixon Outlier — dixon_outlier","text":"Help function detects potential outlier Dixon method, following rules EP28A3 NMPA guideline establishment reference range.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Detect Dixon Outlier — dixon_outlier","text":"","code":"dixon_outlier(x)"},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Detect Dixon Outlier — dixon_outlier","text":"x (numeric) numeric input.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Detect Dixon Outlier — dixon_outlier","text":"list contains outliers vector without outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Detect Dixon Outlier — dixon_outlier","text":"","code":"x <- c(13.6, 44.4, 45.9, 11.9, 41.9, 53.3, 44.7, 95.2, 44.1, 50.7, 45.2, 60.1, 89.1) dixon_outlier(x) #> $ord #> [1] 1 4 8 13 #> #> $out #> [1] 13.6 11.9 
95.2 89.1 #> #> $subset #> [1] 44.4 45.9 41.9 53.3 44.7 44.1 50.7 45.2 60.1 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute Critical Value for ESD Test — esd.critical","title":"Compute Critical Value for ESD Test — esd.critical","text":"helper function find lambda potential outliers iteration.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute Critical Value for ESD Test — esd.critical","text":"","code":"esd.critical(alpha, N, i)"},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute Critical Value for ESD Test — esd.critical","text":"alpha (numeric) type--risk, \\(\\alpha\\). N (integer) total number samples. (integer) iteration number, less number biggest potential outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute Critical Value for ESD Test — esd.critical","text":"lambda value calculated formula.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute Critical Value for ESD Test — esd.critical","text":"","code":"esd.critical(alpha = 0.05, N = 100, i = 1) #> [1] 3.384083"},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary Method for MCTab Objects — getAccuracy","title":"Summary Method for MCTab Objects — getAccuracy","text":"Provides concise summary content MCTab objects. Computes sensitivity, specificity, positive negative predictive values positive negative likelihood ratios diagnostic test reference/gold standard. Computes positive/negative percent agreement, overall percent agreement new test evaluated comparison non-reference standard. Computes average positive/negative agreement tests reference, paired reader precision.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary Method for MCTab Objects — getAccuracy","text":"","code":"getAccuracy(object, ...) # S4 method for MCTab getAccuracy( object, ref = c(\"r\", \"nr\", \"bnr\"), alpha = 0.05, r_ci = c(\"wilson\", \"wald\", \"clopper-pearson\"), nr_ci = c(\"wilson\", \"wald\", \"clopper-pearson\"), bnr_ci = \"bootstrap\", bootCI = c(\"perc\", \"norm\", \"basic\", \"stud\", \"bca\"), nrep = 1000, rng.seed = NULL, digits = 4, ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary Method for MCTab Objects — getAccuracy","text":"object (MCTab) input diagTab function create 2x2 contingency table. ... arguments passed DescTools::BinomCI. ref (character) reference condition. possible choose one condition require. r indicates comparative test standard reference, nr indicates comparative test standard reference, bnr indicates new test comparative test references. alpha (numeric) type--risk, \\(\\alpha\\). r_ci (string) string specifying method calculate confidence interval diagnostic test reference/gold standard. Default wilson. 
Options can wilson, wald clopper-pearson, see DescTools::BinomCI. nr_ci (string) string specifying method calculate confidence interval comparative test non-reference standard. Default wilson. Options can wilson, wald clopper-pearson, see DescTools::BinomCI. bnr_ci (string) string specifying method calculate confidence interval tests reference like reader precision. Default bootstrap. point estimate ANA APA equal 0 100%, method changed transformed wilson. bootCI (string) string specifying bootstrap confidence interval boot.ci() function boot package. Default perc(bootstrap percentile), options can norm(normal approximation), basic(basic bootstrap), stud(studentized bootstrap) bca(adjusted bootstrap percentile). nrep (integer) number replicates bootstrapping, default 1000. rng.seed (integer) number random number generator seed bootstrap sampling. set NULL currently R session used RNG setting used. digits (integer) desired number digits. Default 4.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summary Method for MCTab Objects — getAccuracy","text":"data frame contains qualitative diagnostic accuracy criteria three columns estimated value confidence interval. sens: Sensitivity refers often test positive condition interest present. spec: Specificity refers often test negative condition interest absent. ppv: Positive predictive value refers percentage subjects positive test result target condition. npv: Negative predictive value refers percentage subjects negative test result target condition. plr: Positive likelihood ratio refers probability true positive rate divided false positive rate. nlr: Negative likelihood ratio refers probability false negative rate divided true negative rate. ppa: Positive percent agreement, equals sensitivity candidate method evaluated comparison comparative method, reference/gold standard. npa: Negative percent agreement, equals specificity candidate method evaluated comparison comparative method, reference/gold standard. opa: Overall percent agreement. apa: Average positive agreement refers positive agreements can regarded weighted ppa.
ana: Average negative agreement refers negative agreements can regarded weighted npa.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summary Method for MCTab Objects — getAccuracy","text":"","code":"# For qualitative performance data(\"qualData\") tb <- qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) getAccuracy(tb, ref = \"r\") #> EST LowerCI UpperCI #> sens 0.8841 0.8200 0.9274 #> spec 0.8710 0.7655 0.9331 #> ppv 0.9385 0.8833 0.9685 #> npv 0.7714 0.6605 0.8541 #> plr 6.8514 3.5785 13.1181 #> nlr 0.1331 0.0832 0.2131 getAccuracy(tb, ref = \"nr\", nr_ci = \"wilson\") #> EST LowerCI UpperCI #> ppa 0.8841 0.8200 0.9274 #> npa 0.8710 0.7655 0.9331 #> opa 0.8800 0.8277 0.9180 # For Between-Reader precision performance data(\"PDL1RP\") reader <- PDL1RP$btw_reader tb2 <- reader %>% diagTab( formula = Reader ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Site\" ) getAccuracy(tb2, ref = \"bnr\") #> EST LowerCI UpperCI #> apa 0.9479 0.9246 0.9679 #> ana 0.9540 0.9328 0.9714 #> opa 0.9511 0.9289 0.9689 getAccuracy(tb2, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9479 0.9260 0.9686 #> ana 0.9540 0.9342 0.9730 #> opa 0.9511 0.9311 0.9711"},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":null,"dir":"Reference","previous_headings":"","what":"Detect Outliers From BAsummary Object — getOutlier","title":"Detect Outliers From BAsummary Object — getOutlier","text":"Detect potential outliers absolute relative differences BAsummary object 4E ESD method.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Detect Outliers From BAsummary Object — getOutlier","text":"","code":"getOutlier(object, ...) # S4 method for BAsummary getOutlier( object, method = c(\"ESD\", \"4E\"), difference = c(\"abs\", \"rel\"), alpha = 0.05, h = 5 )"},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Detect Outliers From BAsummary Object — getOutlier","text":"object (BAsummary) input blandAltman function generate Bland-Altman analysis result contains absolute relative differences. ... used. method (string) string specifying method use. Default ESD. difference (string) string specifying difference type use ESD method. Default abs means absolute difference, rel relative difference. alpha (numeric) type--risk. used method defined ESD. h (integer) positive integer indicating number suspected outliers. used method defined ESD.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Detect Outliers From BAsummary Object — getOutlier","text":"list contains statistics results (stat), outliers' ord id (ord), sample id (sid), matrix outliers (outmat) matrix without outliers (rmmat).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Detect Outliers From BAsummary Object — getOutlier","text":"Bland-Altman analysis used input data regardless 4E ESD method necessary determine absolute relative differences beforehand. 
4E method, absolute relative differences required define, bias exceeds 4 fold absolute relative differences. However ESD method, one necessary (latter recommended), bias needs meet ESD test.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Detect Outliers From BAsummary Object — getOutlier","text":"","code":"data(\"platelet\") # Using `blandAltman` function with default arguments ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba, method = \"ESD\", difference = \"rel\") #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE #> #> $ord #> [1] 1 4 2 10 #> #> $sid #> [1] 1 4 2 10 #> #> $outmat #> sid x y #> 1 1 1.5 3.0 #> 2 2 4.0 6.9 #> 3 4 10.2 18.5 #> 4 10 16.4 10.8 #> #> $rmmat #> sid x y #> 1 3 9.2 8.0 #> 2 5 11.2 9.0 #> 3 6 12.4 13.0 #> 4 7 14.8 19.7 #> 5 8 14.8 16.0 #> 6 9 15.9 21.9 #> 7 11 17.6 22.6 #> 8 12 18.1 15.9 #> 9 13 18.1 20.0 #> 10 14 19.2 14.0 #> 11 15 19.6 25.9 #> 12 16 19.9 21.8 #> 13 17 20.4 24.5 #> 14 18 21.2 29.2 #> 15 19 22.0 27.0 #> 16 20 22.2 24.0 #> 17 21 23.4 25.8 #> 18 22 25.2 22.0 #> 19 23 25.5 19.7 #> 20 24 25.6 33.4 #> 21 25 26.3 30.0 #> 22 26 26.4 28.9 #> 23 27 27.5 34.3 #> 24 28 28.2 34.3 #> 25 29 30.3 35.8 #> 26 30 31.4 37.8 #> 27 31 32.9 37.1 #> 28 32 33.9 40.3 #> 29 33 34.3 37.1 #> 30 34 35.3 40.0 #> 31 35 38.4 42.2 #> 32 36 39.2 49.3 #> 33 37 48.2 41.0 #> 34 38 49.0 55.0 #> 35 39 51.3 55.0 #> 36 40 52.2 64.6 #> 37 41 60.2 54.8 #> 38 42 61.5 64.6 #> 39 43 78.0 78.6 #> 40 44 80.6 91.4 #> 41 45 84.4 65.7 #> 42 46 85.3 97.2 #> 43 47 89.0 100.0 #> 44 48 92.6 103.2 #> 45 49 94.9 89.6 #> 46 50 108.6 123.4 #> 47 51 110.4 115.0 #> 48 52 115.6 124.4 #> 49 53 116.9 138.1 #> 50 54 122.7 139.2 #> 51 55 143.6 166.8 #> 52 56 146.1 143.7 #> 53 57 146.2 150.8 #> 54 58 154.5 178.5 #> 55 59 161.7 183.4 #> 56 60 167.7 176.1 #> 57 61 176.6 173.7 #> 58 62 179.7 180.4 #> 59 63 188.9 198.9 #> 60 64 189.0 199.4 #> 61 65 197.9 211.1 #> 62 66 201.7 220.1 #> 63 67 207.7 218.3 #> 64 68 209.2 223.4 #> 65 69 210.5 196.8 #> 66 70 210.9 223.8 #> 67 71 214.1 232.2 #> 68 72 218.6 237.1 #> 69 73 232.9 247.9 #> 70 74 235.0 227.0 #> 71 75 237.8 235.3 #> 72 76 246.1 283.0 #> 73 77 252.6 263.5 #> 74 78 254.9 283.5 #> 75 79 261.4 272.3 #> 76 80 262.4 256.6 #> 77 81 270.1 289.2 #> 78 82 271.3 265.7 #> 79 83 273.5 264.5 #> 80 84 274.2 262.2 #> 81 85 281.1 271.1 #> 82 86 297.0 311.7 #> 83 87 298.7 296.5 #> 84 88 326.7 310.2 #> 85 89 327.1 362.1 #> 86 90 329.6 368.5 #> 87 91 332.8 370.6 #> 88 92 337.4 379.5 #> 89 93 340.1 358.3 #> 90 94 364.8 390.6 #> 91 95 370.1 408.4 #> 92 96 390.6 371.0 #> 93 97 395.7 431.7 #> 94 98 419.3 438.7 #> 95 99 421.3 382.3 #> 96 100 426.3 441.8 #> 97 101 440.4 455.6 #> 98 102 443.4 465.8 #> 99 103 446.2 416.4 #> 100 104 462.7 480.3 #> 101 105 467.7 470.7 #> 102 106 507.4 496.7 #> 103 107 568.3 595.9 #> 104 108 599.6 611.0 #> 105 109 613.8 622.3 #> 106 110 633.5 641.3 #> 107 111 678.6 717.5 #> 108 112 687.6 714.9 #> 109 113 695.1 647.3 #> 110 114 701.0 725.6 #> 111 115 708.3 729.5 #> 112 116 735.6 754.5 #> 113 117 794.8 768.5 #> 114 118 937.0 901.6 
#> 115 119 1031.9 1068.0 #> 116 120 1239.3 1279.0 #> # Using sample id as input ba2 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) getOutlier(ba2, method = \"ESD\", difference = \"rel\") #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE #> #> $ord #> [1] 1 4 2 10 #> #> $sid #> [1] \"ID1\" \"ID4\" \"ID2\" \"ID10\" #> #> $outmat #> sid x y #> 1 ID1 1.5 3 #> 2 ID2 4 6.9 #> 3 ID4 10.2 18.5 #> 4 ID10 16.4 10.8 #> #> $rmmat #> sid x y #> 1 ID3 9.2 8 #> 2 ID5 11.2 9 #> 3 ID6 12.4 13 #> 4 ID7 14.8 19.7 #> 5 ID8 14.8 16 #> 6 ID9 15.9 21.9 #> 7 ID11 17.6 22.6 #> 8 ID12 18.1 15.9 #> 9 ID13 18.1 20 #> 10 ID14 19.2 14 #> 11 ID15 19.6 25.9 #> 12 ID16 19.9 21.8 #> 13 ID17 20.4 24.5 #> 14 ID18 21.2 29.2 #> 15 ID19 22 27 #> 16 ID20 22.2 24 #> 17 ID21 23.4 25.8 #> 18 ID22 25.2 22 #> 19 ID23 25.5 19.7 #> 20 ID24 25.6 33.4 #> 21 ID25 26.3 30 #> 22 ID26 26.4 28.9 #> 23 ID27 27.5 34.3 #> 24 ID28 28.2 34.3 #> 25 ID29 30.3 35.8 #> 26 ID30 31.4 37.8 #> 27 ID31 32.9 37.1 #> 28 ID32 33.9 40.3 #> 29 ID33 34.3 37.1 #> 30 ID34 35.3 40 #> 31 ID35 38.4 42.2 #> 32 ID36 39.2 49.3 #> 33 ID37 48.2 41 #> 34 ID38 49 55 #> 35 ID39 51.3 55 #> 36 ID40 52.2 64.6 #> 37 ID41 60.2 54.8 #> 38 ID42 61.5 64.6 #> 39 ID43 78 78.6 #> 40 ID44 80.6 91.4 #> 41 ID45 84.4 65.7 #> 42 ID46 85.3 97.2 #> 43 ID47 89 100 #> 44 ID48 92.6 103.2 #> 45 ID49 94.9 89.6 #> 46 ID50 108.6 123.4 #> 47 ID51 110.4 115 #> 48 ID52 115.6 124.4 #> 49 ID53 116.9 138.1 #> 50 ID54 122.7 139.2 #> 51 ID55 143.6 166.8 #> 52 ID56 146.1 143.7 #> 53 ID57 146.2 150.8 #> 54 ID58 154.5 178.5 #> 55 ID59 161.7 183.4 #> 56 ID60 167.7 176.1 #> 57 ID61 176.6 173.7 #> 58 ID62 179.7 180.4 #> 59 ID63 188.9 198.9 #> 60 ID64 189 199.4 #> 61 ID65 197.9 211.1 #> 62 ID66 201.7 220.1 #> 63 ID67 207.7 218.3 #> 64 ID68 209.2 223.4 #> 65 ID69 210.5 196.8 #> 66 ID70 210.9 223.8 #> 67 ID71 214.1 232.2 #> 68 ID72 218.6 237.1 #> 69 ID73 232.9 247.9 #> 70 ID74 235 227 #> 71 ID75 237.8 235.3 #> 72 ID76 246.1 283 #> 73 ID77 252.6 263.5 #> 74 ID78 254.9 283.5 #> 75 ID79 261.4 272.3 #> 76 ID80 262.4 256.6 #> 77 ID81 270.1 289.2 #> 78 ID82 271.3 265.7 #> 79 ID83 273.5 264.5 #> 80 ID84 274.2 262.2 #> 81 ID85 281.1 271.1 #> 82 ID86 297 311.7 #> 83 ID87 298.7 296.5 #> 84 ID88 326.7 310.2 #> 85 ID89 327.1 362.1 #> 86 ID90 329.6 368.5 #> 87 ID91 332.8 370.6 #> 88 ID92 337.4 379.5 #> 89 ID93 340.1 358.3 #> 90 ID94 364.8 390.6 #> 91 ID95 370.1 408.4 #> 92 ID96 390.6 371 #> 93 ID97 395.7 431.7 #> 94 ID98 419.3 438.7 #> 95 ID99 421.3 382.3 #> 96 ID100 426.3 441.8 #> 97 ID101 440.4 455.6 #> 98 ID102 443.4 465.8 #> 99 ID103 446.2 416.4 #> 100 ID104 462.7 480.3 #> 101 ID105 467.7 470.7 #> 102 ID106 507.4 496.7 #> 103 ID107 568.3 595.9 #> 104 ID108 599.6 611 #> 105 ID109 613.8 622.3 #> 106 ID110 633.5 641.3 #> 107 ID111 678.6 717.5 #> 108 ID112 687.6 714.9 #> 109 ID113 695.1 647.3 #> 110 ID114 701 725.6 #> 111 ID115 708.3 729.5 #> 112 ID116 735.6 754.5 #> 113 ID117 794.8 768.5 #> 114 ID118 937 901.6 #> 115 ID119 1031.9 1068 #> 116 ID120 1239.3 1279 #> # Using `blandAltman` function when the `tyep2` is 2 with `X vs. 
(Y-X)/X` difference ba3 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, type2 = 4) getOutlier(ba3, method = \"ESD\", difference = \"rel\") #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.07824269 0.1730707 1.0000000 1 5.325900 3.445148 TRUE #> 2 2 0.07049683 0.1514810 0.8137255 4 4.906415 3.442394 TRUE #> 3 3 0.06419828 0.1355778 0.7250000 2 4.873967 3.439611 TRUE #> 4 4 0.05855040 0.1214221 -0.3414634 10 3.294407 3.436800 FALSE #> 5 5 0.06199880 0.1160523 -0.2708333 14 2.867950 3.433961 FALSE #> 6 6 0.06489299 0.1122769 0.3773585 18 2.782991 3.431092 FALSE #> #> $ord #> [1] 1 4 2 #> #> $sid #> [1] 1 4 2 #> #> $outmat #> sid x y #> 1 1 1.5 3.0 #> 2 2 4.0 6.9 #> 3 4 10.2 18.5 #> #> $rmmat #> sid x y #> 1 3 9.2 8.0 #> 2 5 11.2 9.0 #> 3 6 12.4 13.0 #> 4 7 14.8 19.7 #> 5 8 14.8 16.0 #> 6 9 15.9 21.9 #> 7 10 16.4 10.8 #> 8 11 17.6 22.6 #> 9 12 18.1 15.9 #> 10 13 18.1 20.0 #> 11 14 19.2 14.0 #> 12 15 19.6 25.9 #> 13 16 19.9 21.8 #> 14 17 20.4 24.5 #> 15 18 21.2 29.2 #> 16 19 22.0 27.0 #> 17 20 22.2 24.0 #> 18 21 23.4 25.8 #> 19 22 25.2 22.0 #> 20 23 25.5 19.7 #> 21 24 25.6 33.4 #> 22 25 26.3 30.0 #> 23 26 26.4 28.9 #> 24 27 27.5 34.3 #> 25 28 28.2 34.3 #> 26 29 30.3 35.8 #> 27 30 31.4 37.8 #> 28 31 32.9 37.1 #> 29 32 33.9 40.3 #> 30 33 34.3 37.1 #> 31 34 35.3 40.0 #> 32 35 38.4 42.2 #> 33 36 39.2 49.3 #> 34 37 48.2 41.0 #> 35 38 49.0 55.0 #> 36 39 51.3 55.0 #> 37 40 52.2 64.6 #> 38 41 60.2 54.8 #> 39 42 61.5 64.6 #> 40 43 78.0 78.6 #> 41 44 80.6 91.4 #> 42 45 84.4 65.7 #> 43 46 85.3 97.2 #> 44 47 89.0 100.0 #> 45 48 92.6 103.2 #> 46 49 94.9 89.6 #> 47 50 108.6 123.4 #> 48 51 110.4 115.0 #> 49 52 115.6 124.4 #> 50 53 116.9 138.1 #> 51 54 122.7 139.2 #> 52 55 143.6 166.8 #> 53 56 146.1 143.7 #> 54 57 146.2 150.8 #> 55 58 154.5 178.5 #> 56 59 161.7 183.4 #> 57 60 167.7 176.1 #> 58 61 176.6 173.7 #> 59 62 179.7 180.4 #> 60 63 188.9 198.9 #> 61 64 189.0 199.4 #> 62 65 197.9 211.1 #> 63 66 201.7 220.1 #> 64 67 207.7 218.3 #> 65 68 209.2 223.4 #> 66 69 210.5 196.8 #> 67 70 210.9 223.8 #> 68 71 214.1 232.2 #> 69 72 218.6 237.1 #> 70 73 232.9 247.9 #> 71 74 235.0 227.0 #> 72 75 237.8 235.3 #> 73 76 246.1 283.0 #> 74 77 252.6 263.5 #> 75 78 254.9 283.5 #> 76 79 261.4 272.3 #> 77 80 262.4 256.6 #> 78 81 270.1 289.2 #> 79 82 271.3 265.7 #> 80 83 273.5 264.5 #> 81 84 274.2 262.2 #> 82 85 281.1 271.1 #> 83 86 297.0 311.7 #> 84 87 298.7 296.5 #> 85 88 326.7 310.2 #> 86 89 327.1 362.1 #> 87 90 329.6 368.5 #> 88 91 332.8 370.6 #> 89 92 337.4 379.5 #> 90 93 340.1 358.3 #> 91 94 364.8 390.6 #> 92 95 370.1 408.4 #> 93 96 390.6 371.0 #> 94 97 395.7 431.7 #> 95 98 419.3 438.7 #> 96 99 421.3 382.3 #> 97 100 426.3 441.8 #> 98 101 440.4 455.6 #> 99 102 443.4 465.8 #> 100 103 446.2 416.4 #> 101 104 462.7 480.3 #> 102 105 467.7 470.7 #> 103 106 507.4 496.7 #> 104 107 568.3 595.9 #> 105 108 599.6 611.0 #> 106 109 613.8 622.3 #> 107 110 633.5 641.3 #> 108 111 678.6 717.5 #> 109 112 687.6 714.9 #> 110 113 695.1 647.3 #> 111 114 701.0 725.6 #> 112 115 708.3 729.5 #> 113 116 735.6 754.5 #> 114 117 794.8 768.5 #> 115 118 937.0 901.6 #> 116 119 1031.9 1068.0 #> 117 120 1239.3 1279.0 #> # Using \"4E\" as the method input ba4 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba4, method = \"4E\") #> No outlier is detected."},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":null,"dir":"Reference","previous_headings":"","what":"Inermediate Precision Data — glucose","title":"Inermediate Precision Data — glucose","text":"data set consists Glucose intermediate 
precision data CLSI EP05-A3 guideline.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Inermediate Precision Data — glucose","text":"","code":"glucose"},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Inermediate Precision Data — glucose","text":"glucose data set contains 80 observations 3 variables. day day number run run number value measurement value","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Inermediate Precision Data — glucose","text":"CLSI-EP05A3 Table A1. Glucose Precision Evaluation Measurements (mg/dL) cited data set.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Inermediate Precision Data — glucose","text":"EP05A3: Evaluation Precision Quantitative Measurement Procedures.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute Difference for Bland-Altman — h_difference","title":"Compute Difference for Bland-Altman — h_difference","text":"Helper function computes difference specific type.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute Difference for Bland-Altman — h_difference","text":"","code":"h_difference(x, y, type)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute Difference for Bland-Altman — h_difference","text":"x (numeric) reference method. y (numeric) test method. type (integer) integer specifying specific difference Bland-Altman (default 3). Possible choices : 1 - difference X vs. Y-X (absolute differences). 2 - difference X vs. (Y-X)/X (relative differences). 3 - difference 0.5*(X+Y) vs. Y-X (absolute differences). 4 - difference 0.5*(X+Y) vs. (Y-X)/X (relative differences). 5 - difference 0.5*(X+Y) vs. 
(Y-X)/(0.5*(X+Y)) (relative differences).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute Difference for Bland-Altman — h_difference","text":"matrix contains x y measurement data corresponding difference.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute Difference for Bland-Altman — h_difference","text":"","code":"h_difference(x = c(1.1, 1.2, 1.5), y = c(1.2, 1.3, 1.4), type = 5) #> x y x_ba y_ba #> [1,] 1.1 1.2 1.15 0.08695652 #> [2,] 1.2 1.3 1.25 0.08000000 #> [3,] 1.5 1.4 1.45 -0.06896552"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":null,"dir":"Reference","previous_headings":"","what":"Factor Variable Per Levels — h_factor","title":"Factor Variable Per Levels — h_factor","text":"Helper function factor inputs order appearance, per levels provide.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Factor Variable Per Levels — h_factor","text":"","code":"h_factor(df, var, levels = NULL, ...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Factor Variable Per Levels — h_factor","text":"df (data.frame) input data. var (string) variable factor. levels (vector) character vector known levels. ... arguments passed factor().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Factor Variable Per Levels — h_factor","text":"factor variable","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Factor Variable Per Levels — h_factor","text":"","code":"df <- data.frame(a = c(\"aa\", \"a\", \"aa\")) h_factor(df, var = \"a\") #> [1] aa a aa #> Levels: a aa h_factor(df, var = \"a\", levels = c(\"aa\", \"a\")) #> [1] aa a aa #> Levels: aa a"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":null,"dir":"Reference","previous_headings":"","what":"Format and Concatenate to String — h_fmt_est","title":"Format and Concatenate to String — h_fmt_est","text":"Help function format numeric data strings concatenate single character.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format and Concatenate to String — h_fmt_est","text":"","code":"h_fmt_est(num1, num2, digits = c(2, 2), width = c(6, 6))"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format and Concatenate to String — h_fmt_est","text":"num1 (numeric) first numeric input. num2 (numeric) second numeric input. digits (integer) desired number digits decimal point. 
width (integer) total field width.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format and Concatenate to String — h_fmt_est","text":"single character.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Format and Concatenate to String — h_fmt_est","text":"","code":"h_fmt_est(num1 = 3.14, num2 = 3.1415, width = c(4, 4)) #> [1] \"3.14 (3.14)\""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":null,"dir":"Reference","previous_headings":"","what":"Format Numeric Data — h_fmt_num","title":"Format Numeric Data — h_fmt_num","text":"Help function format numeric data formatC function.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format Numeric Data — h_fmt_num","text":"","code":"h_fmt_num(x, digits, width = digits + 4)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format Numeric Data — h_fmt_num","text":"x (numeric) numeric input. digits (integer) desired number digits decimal point (format = \"f\"). width (integer) total field width.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format Numeric Data — h_fmt_num","text":"character object specific digits width.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Format Numeric Data — h_fmt_num","text":"","code":"h_fmt_num(pi * 10^(-2:2), digits = 2, width = 6) #> [1] \" 0.03\" \" 0.31\" \" 3.14\" \" 31.42\" \"314.16\""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":null,"dir":"Reference","previous_headings":"","what":"Format and Concatenate to Range — h_fmt_range","title":"Format and Concatenate to Range — h_fmt_range","text":"Help function format numeric data strings concatenate single character range.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format and Concatenate to Range — h_fmt_range","text":"","code":"h_fmt_range(num1, num2, digits = c(2, 2), width = c(6, 6))"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format and Concatenate to Range — h_fmt_range","text":"num1 (numeric) first numeric input. num2 (numeric) second numeric input. digits (integer) desired number digits decimal point. 
width (integer) total field width.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format and Concatenate to Range — h_fmt_range","text":"single character.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Format and Concatenate to Range — h_fmt_range","text":"","code":"h_fmt_range(num1 = 3.14, num2 = 3.14, width = c(4, 4)) #> [1] \"(3.14, 3.14)\""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":null,"dir":"Reference","previous_headings":"","what":"Summarize Basic Statistics — h_summarize","title":"Summarize Basic Statistics — h_summarize","text":"Help function summarizes statistics needed.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summarize Basic Statistics — h_summarize","text":"","code":"h_summarize(x, conf.level = 0.95)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summarize Basic Statistics — h_summarize","text":"x (numeric) input numeric vector. conf.level (numeric) significance level, default 0.95.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summarize Basic Statistics — h_summarize","text":"vector contains several statistics, n, mean, median, min, max, q25, q75, sd, se, limit agreement limit confidence interval.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summarize Basic Statistics — h_summarize","text":"","code":"h_summarize(1:50) #> n mean median min max q1 q3 sd se limit_lr limit_ur #> [1,] 50 25.5 25.5 1 50 13.25 37.75 14.57738 2.061553 -3.071139 54.07114 #> ci_lr ci_ur #> [1,] 21.45943 29.54057"},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":null,"dir":"Reference","previous_headings":"","what":"Two-sampled Paired Test Data — ldlroc","title":"Two-sampled Paired Test Data — ldlroc","text":"data set consists measurements low-density lipoprotein (LDL), oxidized low-density lipoprotein (OxLDL) corresponding diagnosis. OxLDL thought active molecule process atherosclerosis, proponents believe serum concentration provide accurate risk stratification traditional LDL assay.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Two-sampled Paired Test Data — ldlroc","text":"","code":"ldlroc"},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Two-sampled Paired Test Data — ldlroc","text":"ldlroc data set contains 50 observations 3 variables.
Diagnosis diagnosis, 1 represents subject disease condition interest present, 0 absent OxLDL oxidized low-density lipoprotein(OxLDL) measurement value LDL low-density lipoprotein(LDL) measurement value","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Two-sampled Paired Test Data — ldlroc","text":"CLSI-EP24A2 Table D1. OxLDL LDL Assay Values (U/L) 50 Subjects.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Two-sampled Paired Test Data — ldlroc","text":"EP24A2 Assessment Diagnostic Accuracy Laboratory Tests Using Receiver Operating Characteristic Curves.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/mcradds-package.html","id":null,"dir":"Reference","previous_headings":"","what":"mcradds Package — mcradds-package","title":"mcradds Package — mcradds-package","text":"mcradds Processing analyzing Vitro Diagnostic Data.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/mcradds-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"mcradds Package — mcradds-package","text":"Maintainer: Kai Gu gukai1212@163.com [copyright holder]","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":null,"dir":"Reference","previous_headings":"","what":"Nonparametric Method in Calculation of Reference Interval — nonparRI","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"nonparametric method used calculate reference interval distribution skewed sample size 120 observations.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"","code":"nonparRI(x, ind = 1:length(x), conf.level = 0.95)"},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"x (numeric) numeric measurements target population. ind (integer) integer vector boot process, default elements x. conf.level (numeric) percentile reference limit.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"vector nonparametric reference interval","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"","code":"data(\"calcium\") x <- calcium$Value nonparRI(x) #> 2.5% 97.5% #> 9.1 10.3"},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":null,"dir":"Reference","previous_headings":"","what":"Nonparametric Rank Number of Reference Interval — nonparRanks","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"data shows rank number computing confidence interval nonparametric reference limit samples within 119-1000 values. 
reference interval must 95% confidence interval 90%.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"","code":"nonparRanks"},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"nonparRanks data set contains 882 observations 3 variables. SampleSize sample size Lower lower rank Upper upper rank","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"CLSI-EP28A3 Table 8. cited data set.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"EP28-A3c: Defining, Establishing, Verifying Reference Intervals Clinical Laboratory.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":null,"dir":"Reference","previous_headings":"","what":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"Adjust cor.test function can define specific H0 per request, based Fisher's Z transformation correlation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"","code":"pearsonTest( x, y, h0 = 0, conf.level = 0.95, alternative = c(\"two.sided\", \"less\", \"greater\"), ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"x (numeric) one measurement. y (numeric) another measurement. h0 (numeric) specified hypothesized value difference two correlations, default 0. conf.level (numeric) significance level returned confidence interval hypothesis. alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". ... 
arguments passed cor.test().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"named vector contains correlation coefficient (cor), confidence interval(lowerci upperci), Z statistic (Z) p-value (pval)","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"NCSS correlation document","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"","code":"x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) pearsonTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.5711816 -0.1497426 0.8955795 0.2448722 0.4032777 #> #> $method #> [1] \"Pearson's correlation\" #> #> $conf.level #> [1] 0.95 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":null,"dir":"Reference","previous_headings":"","what":"Pipe operator — %>%","title":"Pipe operator — %>%","text":"See magrittr::%>% details.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pipe operator — %>%","text":"","code":"lhs %>% rhs"},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pipe operator — %>%","text":"lhs value magrittr placeholder. rhs function call using magrittr semantics.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Pipe operator — %>%","text":"result calling rhs(lhs).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":null,"dir":"Reference","previous_headings":"","what":"Quantitative Measurement Data — platelet","title":"Quantitative Measurement Data — platelet","text":"example platelet can used create data set comparing Platelet results two analyzers cells.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Quantitative Measurement Data — platelet","text":"","code":"platelet"},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Quantitative Measurement Data — platelet","text":"platelet data set contains 120 observations 3 variables. 
Sample Sample id Comparative Measurements comparative analyzer Candidate Measurements candidate analyzer","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Quantitative Measurement Data — platelet","text":"CLSI-EP09 A3 Appendix H, Table H2 cited data set.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/qualData.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulated Qualitative Data — qualData","title":"Simulated Qualitative Data — qualData","text":"simulated data qualData can used calculate qualitative performance sensitivity specificity.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/qualData.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulated Qualitative Data — qualData","text":"","code":"qualData"},{"path":"https://kaigu1990.github.io/mcradds/reference/qualData.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Simulated Qualitative Data — qualData","text":"qualData data set contains 200 observations 3 variables. Sample Sample id ComparativeN Measurements comparative analyzer 1=positive 0=negative CandidateN Measurements candidate analyzer 1=positive 0=negative","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"function used establish reference interval target population parametric, non-parametric robust methods follows CLSI-EP28A3 NMPA guideline. additional, also provides corresponding confidence interval lower/upper reference limit needed. Given outliers identified beforehand, Tukey Dixon methods can applied depending distribution data.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"","code":"refInterval( x, out_method = c(\"doxin\", \"tukey\"), out_rm = FALSE, RI_method = c(\"parametric\", \"nonparametric\", \"robust\"), CI_method = c(\"parametric\", \"nonparametric\", \"boot\"), refLevel = 0.95, bootCI = c(\"perc\", \"norm\", \"basic\", \"stud\", \"bca\"), confLevel = 0.9, rng.seed = NULL, tol = 1e-06, R = 10000 )"},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"x (numeric) numeric measurements target population. out_method (string) string specifying outlier detection use. out_rm (logical) whether outliers removed . RI_method (string) string specifying method computing reference interval use. Default parametric, options can nonparametric robust. CI_method (string) string specifying method computing confidence interval reference limit(lower upper) use. Default parametric, options can nonparametric boot. refLevel (numeric) reference range/interval, usual 0.95. bootCI (string) string specifying bootstrap confidence interval boot.ci() function boot package. 
Default perc(bootstrap percentile), options can norm(normal approximation), boot(basic bootstrap), stud(studentized bootstrap) bca(adjusted bootstrap percentile). confLevel (numeric) significance level confidence interval reference limit. rng.seed (integer) number random number generator seed bootstrap sampling. set NULL currently R session used RNG setting used. tol (numeric) tolerance iterative process can stopped robust method. R (integer) number bootstrap replicates, used boot() function.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"RefInt object contains relevant results establishing reference interval.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"conditions use aware : parametric method used calculate reference interval, confidence interval method well. non-parametric method used calculate reference interval sample size 120 observations, non-parametric suggested confidence interval. Otherwise sample size 120, bootstrap method better choice. Beside non-parametric method confidence interval allows refLevel=0.95 confLevel=0.9 arguments, bootstrap methods used automatically. robust method used calculate reference interval, method confidence interval must bootstrap.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"","code":"data(\"calcium\") x <- calcium$Value refInterval(x, RI_method = \"parametric\", CI_method = \"parametric\") #> #> Reference Interval Method: parametric, Confidence Interval Method: parametric #> #> Call: refInterval(x = x, RI_method = \"parametric\", CI_method = \"parametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.05, 10.32 #> RefLower Confidence Interval: 8.9926, 9.1100 #> Refupper Confidence Interval: 10.2584, 10.3757 refInterval(x, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> Reference Interval Method: nonparametric, Confidence Interval Method: nonparametric #> #> Call: refInterval(x = x, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.10, 10.30 #> RefLower Confidence Interval: 8.9000, 9.2000 #> Refupper Confidence Interval: 10.3000, 10.4000 refInterval(x, RI_method = \"robust\", CI_method = \"boot\", R = 1000) #> [1] \"Bootstrape process could take a short while.\" #> #> Reference Interval Method: robust, Confidence Interval Method: boot #> #> Call: refInterval(x = x, RI_method = \"robust\", CI_method = \"boot\", #> R = 1000) #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.04, 10.32 #> RefLower Confidence Interval: 8.9777, 9.0979 #> Refupper Confidence Interval: 10.2568, 10.3751"},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":null,"dir":"Reference","previous_headings":"","what":"Robust Method in Calculation of Reference Interval — robustRI","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"robust method used calculate reference interval small sample size (120 
observations).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"","code":"robustRI(x, ind = 1:length(x), conf.level = 0.95, tol = 1e-06)"},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"x (numeric) numeric measurements target population. ind (integer) integer vector boot process, default elements x. conf.level (numeric) significance level internal t statistic. tol (numeric) tolerance iterative process can stopped.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"vector robust reference interval","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"robust algorithm referring CLSI document EP28A3.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"","code":"# This example data is taken from EP28A3 Appendix B. to ensure the result is in accordance. x <- c(8.9, 9.2, rep(9.4, 2), rep(9.5, 3), rep(9.6, 4), rep(9.7, 5), 9.8, rep(9.9, 2), 10.2) robustRI(x) #> [1] 9.049545 10.199396"},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":null,"dir":"Reference","previous_headings":"","what":"SampleSize Class — SampleSize-class","title":"SampleSize Class — SampleSize-class","text":"SampleSize class serves store results parameters sample size calculation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"SampleSize Class — SampleSize-class","text":"","code":"SampleSize(call, method, n, param)"},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"SampleSize Class — SampleSize-class","text":"call (call) function call. method (character) method name. n (numeric) number sample size. 
param (list) list relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"SampleSize Class — SampleSize-class","text":"object class SampleSize.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"SampleSize Class — SampleSize-class","text":"call call method method n n param param","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":null,"dir":"Reference","previous_headings":"","what":"Show Method for Objects — show,SampleSize-method","title":"Show Method for Objects — show,SampleSize-method","text":"show method displays essential information objects.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Show Method for Objects — show,SampleSize-method","text":"","code":"# S4 method for SampleSize show(object) # S4 method for MCTab show(object) # S4 method for BAsummary show(object) # S4 method for RefInt show(object) # S4 method for tpROC show(object)"},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Show Method for Objects — show,SampleSize-method","text":"object () input.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Show Method for Objects — show,SampleSize-method","text":"None (invisible NULL), used side effect printing console.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Show Method for Objects — show,SampleSize-method","text":"","code":"# Sample zie calculation size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> Sample size determination for one Proportion #> #> Call: size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> optimal sample size: n = 239 #> #> p1:0.95 p0:0.9 alpha:0.05 power:0.8 alternative:two.sided size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> Sample size determination for a Given Lower Confidence Interval of Pearson's Correlation #> #> Call: size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> optimal sample size: n = 86 #> #> r:0.9 lr:0.85 alpha:0.025 interval:c(10, 1e+05) tol:1e-05 alternative:greater # Get 2x2 Contingency Table qualData %>% diagTab(formula = ~ CandidateN + ComparativeN) #> Contingency Table: #> #> levels: 0 1 #> ComparativeN #> CandidateN 0 1 #> 0 54 16 #> 1 8 122 # Bland-Altman analysis data(\"platelet\") blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # Reference Interval data(\"calcium\") refInterval(x = calcium$Value, RI_method = \"nonparametric\", 
CI_method = \"nonparametric\") #> #> Reference Interval Method: nonparametric, Confidence Interval Method: nonparametric #> #> Call: refInterval(x = calcium$Value, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.10, 10.30 #> RefLower Confidence Interval: 8.9000, 9.2000 #> Refupper Confidence Interval: 10.3000, 10.4000 # Comparing the Paired ROC when Non-inferiority margin <= -0.1 data(\"ldlroc\") aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"non-inferiority\", h0 = -0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing non-inferiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is non-inferiority to -0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 4.2739 #> Pvalue: 9.606e-06"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"function performs sample size computation testing Pearson's correlation lower confidence interval provided.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"","code":"size_ci_corr( r, lr, alpha = 0.05, interval = c(10, 1e+05), tol = 1e-05, alternative = c(\"two.sided\", \"less\", \"greater\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"r (numeric) expected correlation coefficient evaluated assay. lr (numeric) acceptable correlation coefficient evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). interval (numeric) numeric vector containing end-points interval searched root(sample size). defaults set c(1, 100000). tol (numeric) tolerance searching root(sample size). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"Fisher (1973, p. 
199).","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"","code":"size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> Sample size determination for a Given Lower Confidence Interval of Pearson's Correlation #> #> Call: size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> optimal sample size: n = 86 #> #> r:0.9 lr:0.85 alpha:0.025 interval:c(10, 1e+05) tol:1e-05 alternative:greater"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"function performs sample size computation testing given lower confidence interval one proportion using Simple Asymptotic(Wald), Wilson score, clopper-pearson methods.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"","code":"size_ci_one_prop( p, lr, alpha = 0.05, interval = c(1, 1e+05), tol = 1e-05, alternative = c(\"two.sided\", \"less\", \"greater\"), method = c(\"simple-asymptotic\", \"wilson\", \"wald\", \"clopper-pearson\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"p (numeric) expected criteria evaluated assay. lr (numeric) acceptable criteria evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). interval (numeric) numeric vector containing end-points interval searched root(sample size). defaults set c(1, 100000). tol (numeric) tolerance searching root(sample size). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". method (string) string specifying method use. Simple Asymptotic default, equal Wald. Options can \"wilson\", \"clopper-pearson\" method, see DescTools::BinomCIn","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"Newcombe, R. G. 1998. 'Two-Sided Confidence Intervals Single Proportion: Comparison Seven Methods.' Statistics Medicine, 17, pp. 
857-872.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"","code":"size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> optimal sample size: n = 246 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:wilson size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> optimal sample size: n = 196 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:simple-asymptotic size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wald\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wald\") #> #> optimal sample size: n = 196 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:wald"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing Pearson's correlation — size_corr","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"function performs sample size computation testing Pearson's correlation, using uses Fisher's classic z-transformation normalize distribution Pearson's correlation coefficient.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"","code":"size_corr( r1, r0, alpha = 0.05, power = 0.8, alternative = c(\"two.sided\", \"less\", \"greater\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"r1 (numeric) expected correlation coefficient evaluated assay. r0 (numeric) acceptable correlation coefficient evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). power (numeric) Power test, equal 1 minus type-II-risk (\\(\\beta\\)). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"Fisher (1973, p. 
199).","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"","code":"size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> Sample size determination for testing Pearson's Correlation #> #> Call: size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> optimal sample size: n = 64 #> #> r1:0.95 r0:0.9 alpha:0.025 power:0.8 alternative:greater"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing One Proportion — size_one_prop","title":"Sample Size for Testing One Proportion — size_one_prop","text":"function performs sample size computation testing one proportion accordance Chinese NMPA's IVD guideline.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing One Proportion — size_one_prop","text":"","code":"size_one_prop( p1, p0, alpha = 0.05, power = 0.8, alternative = c(\"two.sided\", \"less\", \"greater\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing One Proportion — size_one_prop","text":"p1 (numeric) expected criteria evaluated assay. p0 (numeric) acceptable criteria evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). power (numeric) Power test, equal 1 minus type-II-risk (\\(\\beta\\)). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing One Proportion — size_one_prop","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing One Proportion — size_one_prop","text":"Chinese NMPA's IVD technical guideline.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing One Proportion — size_one_prop","text":"","code":"size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> Sample size determination for one Proportion #> #> Call: size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> optimal sample size: n = 239 #> #> p1:0.95 p0:0.9 alpha:0.05 power:0.8 alternative:two.sided"},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":null,"dir":"Reference","previous_headings":"","what":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"Providing confidence interval Spearman's rank correlation Bootstrap, define specific H0 per request, based Fisher's Z transformation correlation variance recommended Bonett Wright (2000), 
Pearson's.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"","code":"spearmanTest( x, y, h0 = 0, conf.level = 0.95, alternative = c(\"two.sided\", \"less\", \"greater\"), nrep = 1000, rng.seed = NULL, ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"x (numeric) one measurement. y (numeric) another measurement. h0 (numeric) specified hypothesized value difference two correlations, default 0. conf.level (numeric) significance level returned confidence interval hypothesis. alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". nrep (integer) number replicates bootstrapping, default 1000. rng.seed (integer) number random number generator seed bootstrap sampling. set NULL currently R session used RNG setting used. ... arguments passed cor.test().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"named vector contains correlation coefficient (cor), confidence interval(lowerci upperci), Z statistic (Z) p-value (pval)","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"NCSS correlation document","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"","code":"x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) spearmanTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.6000000 -0.1581140 0.9765538 0.3243526 0.3728355 #> #> $method #> [1] \"Spearman's correlation\" #> #> $conf.level #> [1] 0.95 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":null,"dir":"Reference","previous_headings":"","what":"Test for Paired ROC Class — tpROC-class","title":"Test for Paired ROC Class — tpROC-class","text":"tpROC class serves store results testing AUC paired two-sample assays.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test for Paired ROC Class — tpROC-class","text":"","code":"tpROC(testROC, refROC, method, H0, stat)"},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test for Paired ROC Class — tpROC-class","text":"testROC (list) object pROC::roc() function test assay. refROC (list) object pROC::roc() function reference/standard assay. method (character) method hypothesis test. H0 (numeric) margin test.
stat (list) list contains difference comparing results, difference AUC, standard error, confidence interval, Z statistic P value.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test for Paired ROC Class — tpROC-class","text":"object class tpROC.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"Test for Paired ROC Class — tpROC-class","text":"testROC testROC refROC refROC method method stat stat","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":null,"dir":"Reference","previous_headings":"","what":"Detect Tukey Outlier — tukey_outlier","title":"Detect Tukey Outlier — tukey_outlier","text":"Help function detects potential outlier Tukey method number Q1-1.5*IQR Q3+1.5*IQR.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Detect Tukey Outlier — tukey_outlier","text":"","code":"tukey_outlier(x)"},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Detect Tukey Outlier — tukey_outlier","text":"x (numeric) numeric input","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Detect Tukey Outlier — tukey_outlier","text":"list contains outliers vector without outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Detect Tukey Outlier — tukey_outlier","text":"","code":"x <- c(13.6, 44.4, 45.9, 14.9, 41.9, 53.3, 44.7, 95.2, 44.1, 50.7, 45.2, 60.1, 89.1) tukey_outlier(x) #> $ord #> [1] 1 4 8 13 #> #> $out #> [1] 13.6 14.9 95.2 89.1 #> #> $subset #> [1] 44.4 45.9 41.9 53.3 44.7 44.1 50.7 45.2 60.1 #>"},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"mcradds-101","dir":"Changelog","previous_headings":"","what":"mcradds 1.0.1","title":"mcradds 1.0.1","text":"CRAN release: 2023-10-11","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"meta-1-0-1","dir":"Changelog","previous_headings":"","what":"Meta","title":"mcradds 1.0.1","text":"Remove mcr package related codes ’s available CRAN.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"meta-1-0-0","dir":"Changelog","previous_headings":"","what":"Meta","title":"mcradds 1.0.0","text":"First public release mcradds package. Submission CRAN.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"new-features-1-0-0","dir":"Changelog","previous_headings":"","what":"New features","title":"mcradds 1.0.0","text":"Added autoplot method Bland-Altman regression plots.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"new-features-0-2-0","dir":"Changelog","previous_headings":"","what":"New features","title":"mcradds 0.2.0","text":"Added tukey_outlier dixon_outlier detect outliers ahead establishing reference range. Added robustRI nonparRI compute robust non-parametric reference range, integrated main program refInterval. 
Wrapped anovaVCA VCAinference VCA package analyze variance components ANOVA model. Added aucTest AUC test paired two-sample measurements designs difference, non-inferiority superiority. Added RefInt tpROC classes corresponding show method. Added calcium, glucose, ldlroc PDL1RP data sets example testing use, nonparRanks data set internal function use.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"enhancements-0-2-0","dir":"Changelog","previous_headings":"","what":"Enhancements","title":"mcradds 0.2.0","text":"Enhanced diagTab getAccuracy can support reader precision ananlysis qualitative performance.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"miscellaneous-0-2-0","dir":"Changelog","previous_headings":"","what":"Miscellaneous","title":"mcradds 0.2.0","text":"Added series helper function format concatenate string. Uniform capital lower-case letters roxygen documents.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"mcradds-010","dir":"Changelog","previous_headings":"","what":"mcradds 0.1.0","title":"mcradds 0.1.0","text":"First release mcradds package, contains basic quantitative qualitative performance methods functions shown .","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"sample-size-0-1-0","dir":"Changelog","previous_headings":"","what":"Sample Size","title":"mcradds 0.1.0","text":"Added size_one_prop size_ci_one_prop sample size qualitative trials, size_corr size_ci_corr quantitative trails.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"classes-and-datasets-0-1-0","dir":"Changelog","previous_headings":"","what":"Classes and Datasets","title":"mcradds 0.1.0","text":"Added SampleSize, MCTab BAsummary classes show method. Added platelet qualData data sets example testing use.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"analyzing-functions-and-methods-0-1-0","dir":"Changelog","previous_headings":"","what":"Analyzing Functions and Methods","title":"mcradds 0.1.0","text":"Added diagTab function get 2x2 contingency table, getAccuracy method compute qualitative diagnostic accuracy criteria. Added blandAltman function calculate statistics Bland-Altman, getOutlier method detect potential outliers. Added pearsonTest spearmanTest, efficient functions compute confidence interval hypothesis test. Added mcreg calcBias methods mcr package wrapped regression analysis.","code":""}] +[{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"Introduction to mcradds","text":"vignette shows general purpose usage mcradds R package. mcradds successor mcr R package developed Roche, therefore fundamental coding ideas method comparison regression borrowed . addition, supplement series useful functions methods based several reference documents CLSI NMPA guidance. can perform statistical analysis graphics different IVD trials utilizing analytical functions. However, unfortunately functions methods validated QC’ed, can guarantee entirely proper error-free. always strive compare results resources order obtain consistent . utilized past usual work process, believe quality package temporarily sufficient use. vignette going learn : Estimate sample size trials, following NMPA guideline. Evaluate diagnostic accuracy /without reference, following CLSI EP12-A2. 
Perform regression methods analysis plots, following CLSI EP09-A3. Perform bland-Altman analysis plots, following CLSI EP09-A3. Detect outliers 4E method CLSI EP09-A2 ESD CLSI EP09-A3. Estimate bias medical decision level, following CLSI EP09-A3. Perform Pearson Spearman correlation analysis adding hypothesis test confidence interval. Evaluate Reference Range/Interval, following CLSI EP28-A3 NMPA guideline. Add paired ROC/AUC test superiority non-inferiority trials, following CLSI EP05-A3/EP15-A3. Perform reproducibility analysis (reader precision) immunohistochemical assays, following CLSI /LA28-A2 NMPA guideline. Evaluate precision quantitative measurements, following CLSI EP05-A3. reference mcradds functions available mcradds website functions reference.","code":"browseVignettes(package = \"mcradds\")"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"common-ivd-trials-analyses","dir":"Articles","previous_headings":"","what":"Common IVD Trials Analyses","title":"Introduction to mcradds","text":"Every analysis purpose can achieved functions S4 methods mcradds package, present general usage . packages used vignette : data sets different purposes used vignette :","code":"library(mcradds) data(\"qualData\") data(\"platelet\") # data(creatinine, package = \"mcr\") data(\"calcium\") data(\"ldlroc\") data(\"PDL1RP\") data(\"glucose\")"},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-1","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.1","title":"Introduction to mcradds","text":"Suppose expected sensitivity criteria new assay 0.9, clinical acceptable criteria 0.85. conduct two-sided normal Z-test significance level α = 0.05 achieve power 80%, total sample 363.","code":"size_one_prop(p1 = 0.9, p0 = 0.85, alpha = 0.05, power = 0.8) #> #> Sample size determination for one Proportion #> #> Call: size_one_prop(p1 = 0.9, p0 = 0.85, alpha = 0.05, power = 0.8) #> #> optimal sample size: n = 363 #> #> p1:0.9 p0:0.85 alpha:0.05 power:0.8 alternative:two.sided"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-2","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.2","title":"Introduction to mcradds","text":"Suppose expected sensitivity criteria new assay 0.85, lower 95% confidence interval Wilson Score significance level α = 0.05 criteria 0.8, total sample 246. 
don’t want use CI Wilson Score just following NMPA’s suggestion appendix, CI Simple-asymptotic recommended 196 sample size, shown .","code":"size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> optimal sample size: n = 246 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:wilson size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> optimal sample size: n = 196 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:simple-asymptotic"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-3","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.3","title":"Introduction to mcradds","text":"Suppose expected correlation coefficient test reference assays 0.95, clinical acceptable criteria 0.9. conduct one-sided test significance level α = 0.025 achieve power 80%, total sample 64.","code":"size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> Sample size determination for testing Pearson's Correlation #> #> Call: size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> optimal sample size: n = 64 #> #> r1:0.95 r0:0.9 alpha:0.025 power:0.8 alternative:greater"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"example-1-4","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Estimation of Sample Size","what":"Example 1.4","title":"Introduction to mcradds","text":"Suppose expected correlation coefficient test reference assays 0.9, lower 95% confidence interval significance level α = 0.025 criteria greater 0.85, total sample 86.","code":"size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> Sample size determination for a Given Lower Confidence Interval of Pearson's Correlation #> #> Call: size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> optimal sample size: n = 86 #> #> r:0.9 lr:0.85 alpha:0.025 interval:c(10, 1e+05) tol:1e-05 alternative:greater"},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"create-2x2-contingency-table","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Evaluation of Diagnostic Accuracy","what":"Create 2x2 contingency table","title":"Introduction to mcradds","text":"Assume wide structure data like qualData contains measurements candidate comparative assays. scenario, ’d better define formula candidate assay first, followed comparative assay right formula, right ~. , add dimname argument indicate row column names 2x2 contingency table, define order levels prefer . Assume long structure data needs summarized, dummy data shown . formula define another format. 
left formula type assay, right measurement.","code":"head(qualData) #> Sample ComparativeN CandidateN #> 1 ID1 1 1 #> 2 ID2 1 0 #> 3 ID3 0 0 #> 4 ID4 1 0 #> 5 ID5 1 1 #> 6 ID6 1 1 tb <- qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) tb #> Contingency Table: #> #> levels: 1 0 #> ComparativeN #> CandidateN 1 0 #> 1 122 8 #> 0 16 54 dummy <- data.frame( id = c(\"1001\", \"1001\", \"1002\", \"1002\", \"1003\", \"1003\"), value = c(1, 0, 0, 0, 1, 1), type = c(\"Test\", \"Ref\", \"Test\", \"Ref\", \"Test\", \"Ref\") ) %>% diagTab( formula = type ~ value, bysort = \"id\", dimname = c(\"Test\", \"Ref\"), levels = c(1, 0) ) dummy #> Contingency Table: #> #> levels: 1 0 #> Ref #> Test 1 0 #> 1 1 1 #> 0 0 1"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"with-referencegold-standard","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Evaluation of Diagnostic Accuracy","what":"With Reference/Gold Standard","title":"Introduction to mcradds","text":"Next step utilize getAccuracy method calculate diagnostic accuracy. reference assay gold standard, argument ref r means ‘reference’. output present several indicators, sensitivity (sens), specificity (spec), positive/negative predictive value (ppv/npv) positive/negative likelihood ratio (plr/nlr). details can found ?getAccuracy.","code":"# Default method is Wilson score, and digit is 4. tb %>% getAccuracy(ref = \"r\") #> EST LowerCI UpperCI #> sens 0.8841 0.8200 0.9274 #> spec 0.8710 0.7655 0.9331 #> ppv 0.9385 0.8833 0.9685 #> npv 0.7714 0.6605 0.8541 #> plr 6.8514 3.5785 13.1181 #> nlr 0.1331 0.0832 0.2131 # Alter the number of digit to 2. tb %>% getAccuracy(ref = \"r\", digit = 2) #> EST LowerCI UpperCI #> sens 0.88 0.82 0.93 #> spec 0.87 0.77 0.93 #> ppv 0.94 0.88 0.97 #> npv 0.77 0.66 0.85 #> plr 6.85 3.58 13.12 #> nlr 0.13 0.08 0.21 # Alter the number of digit to 2. tb %>% getAccuracy(ref = \"r\", r_ci = \"clopper-pearson\") #> EST LowerCI UpperCI #> sens 0.8841 0.8186 0.9323 #> spec 0.8710 0.7615 0.9426 #> ppv 0.9385 0.8823 0.9731 #> npv 0.7714 0.6555 0.8633 #> plr 6.8514 3.5785 13.1181 #> nlr 0.1331 0.0832 0.2131"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"without-referencegold-standard","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Evaluation of Diagnostic Accuracy","what":"Without Reference/Gold Standard","title":"Introduction to mcradds","text":"reference assay gold standard, example, comparative assay approved market sale, ref nr means ‘reference’. output present indicators, positive/negative percent agreement (ppa/npa) overall percent agreement (opa).","code":"# When the reference is a comparative assay, not gold standard. tb %>% getAccuracy(ref = \"nr\", nr_ci = \"wilson\") #> EST LowerCI UpperCI #> ppa 0.8841 0.8200 0.9274 #> npa 0.8710 0.7655 0.9331 #> opa 0.8800 0.8277 0.9180"},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"estimating-regression-coefficient","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Regression coefficient and bias in medical decision level","what":"Estimating Regression coefficient","title":"Introduction to mcradds","text":"Regression agreement important criteria method comparison trials can achieved mcr package provided series regression methods, ‘Deming’, ‘Passing-Bablok’,’ weighted Deming’ . main key functions wrapped mcradds, mcreg, getCoefficients calcBias. 
like utilize entire functions mcr package, just adding specific package name front , like mcr::calcBias(), looks function called mcr package. Please noted mcr package not available CRAN, mcreg mcreg2 function can not be used temporarily.","code":"# Deming regression fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, error.ratio = 1, method.reg = \"Deming\", method.ci = \"jackknife\" ) printSummary(fit) getCoefficients(fit)"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"estimating-bias-in-medical-decision-level","dir":"Articles","previous_headings":"Common IVD Trials Analyses > Regression coefficient and bias in medical decision level","what":"Estimating Bias in Medical Decision Level","title":"Introduction to mcradds","text":"obtained regression equation, whether ‘Deming’ ‘Passing-Bablok’, can use estimate bias medical decision level. Suppose know medical decision level one assay 30, obviously make-up number. can use fit object estimate bias using calcBias function. Please noted mcr package not available CRAN, calcBias function can not be used temporarily.","code":"# absolute bias. calcBias(fit, x.levels = c(30)) # proportional bias. calcBias(fit, x.levels = c(30), type = \"proportional\")"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"bland-altman-analysis","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Bland-Altman Analysis","title":"Introduction to mcradds","text":"Bland-Altman analysis also agreement criteria method comparison trials. term authority’s request, normally present two categories: absolute difference relative difference, order evaluate agreements aspects. outputs descriptive statistics, including ‘mean’, ‘median’, ‘Q1’, ‘Q3’, ‘min’, ‘max’, ‘CI’ (confidence interval mean) ‘LoA’ (Limit Agreement). Please make sure difference type calculation, answer question define absolute relative difference. details information can found ?h_difference, five types available option. Default absolute difference derived Y-X, relative difference (Y-X)/(0.5*(X+Y)). Sometimes think reference (X) gold standard good agreement test (Y), relative difference type can type2 = 4.","code":"# Default difference type blandAltman( x = platelet$Comparative, y = platelet$Candidate, type1 = 3, type2 = 5 ) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> type1 = 3, type2 = 5) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # Change relative difference type to 4.
blandAltman( x = platelet$Comparative, y = platelet$Candidate, type1 = 3, type2 = 4 ) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> type1 = 3, type2 = 4) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/X #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.078 ( 0.173) #> Median 6.350 0.056 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.125) #> Min, Max (-47.800, 42.100) (-0.341, 1.000) #> Limit of Agreement (-24.011, 38.671) (-0.261, 0.417) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.047, 0.109)"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"detecting-outliers","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Detecting Outliers","title":"Introduction to mcradds","text":"know, numerous statistical methodologies detect outliers. try show methods commonly used IVD trials different purposes. First foremost, quantitative data generate outliers, detecting process occurred quantitative trials. method comparison trials, detected outliers used sensitivity analysis common. example, detect 5 outliers 200 subjects trial, conduct sensitivity analysis without outliers interpret difference scenarios. two CLSI’s recommended approaches, 4E ESD, with latter one recommended recent version. mcradds package, can utilize getOutlier method detect outliers method argument define method ’d like, difference arguments difference type like ‘absolute’ ‘relative’ used. addition, mcradds also provides outlier methods evaluating Reference Range, ‘Tukey’ ‘Dixon’ wrapped refInterval() function.","code":"# ESD approach ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) out <- getOutlier(ba, method = \"ESD\", difference = \"rel\") out$stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE out$outmat #> sid x y #> 1 1 1.5 3.0 #> 2 2 4.0 6.9 #> 3 4 10.2 18.5 #> 4 10 16.4 10.8 # 4E approach ba2 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) out2 <- getOutlier(ba2, method = \"4E\") #> No outlier is detected. out2$stat #> NULL out2$outmat #> NULL"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"hypothesis-of-pearson-and-spearman","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Hypothesis of Pearson and Spearman","title":"Introduction to mcradds","text":"correlation coefficient Pearson helpful criteria assessing agreement test reference assays. compute coefficient P value R, cor.test() function commonly used. However P value relies hypothesis H0=0, doesn’t meet requirement authority. required provide P value H0=0.7 sometimes. Thus case, suggest use pearsonTest() function instead, hypothesis based Fisher’s Z transformation correlation. Since cor.test() function can not provide confidence interval special hypothesis Spearman, spearmanTest() function recommended.
function computes CI using bootstrap method, hypothesis based Fisher’s Z transformation correlation, variance proposed Bonett Wright (2000), Pearson’s.","code":"x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) pearsonTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.5711816 -0.1497426 0.8955795 0.2448722 0.4032777 #> #> $method #> [1] \"Pearson's correlation\" #> #> $conf.level #> [1] 0.95 x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) spearmanTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.6000000 -0.1583052 0.9800536 0.3243526 0.3728355 #> #> $method #> [1] \"Spearman's correlation\" #> #> $conf.level #> [1] 0.95"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"establishing-reference-rangeinterval","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Establishing Reference Range/Interval","title":"Introduction to mcradds","text":"refInterval function provides two outlier methods Tukey Dixon, three methods mentioned CLSI establish reference interval (RI). first parametric method follows normal distribution compute confidence interval. second one nonparametric method computes 2.5th 97.5th percentile range reference interval 95%. third one robust method, slightly complicated involves iterative procedure based formulas EP28A3. observations weighted according distance central tendency initially estimated median MAD(median absolute deviation). first two methods also accepted NMPA guideline, robust method recommended NMPA want establish reference interval assay, must collect least 120 samples China. number less 120, can ensure accuracy results. CLSI working group hesitant recommend method well, except extreme instances. default, confidence interval (CI) presented depending RI method utilized. RI method parametric, CI method parametric well. RI method nonparametric sample size 120 observations, nonparametric CI suggested. Otherwise sample size 120, boot method CI better choice. need aware nonparametric method CI allows refLevel = 0.95 confLevel = 0.9 arguments, boot methods CI used automatically. RI method robust method, method CI must boot. like compute 90% reference interval rather 90%, just alter refLevel = 0.9. 
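For instance, a 90% reference interval could be requested as below (a brief call sketch using the arguments described above; the RI and CI methods are spelled out explicitly because the defaults are not restated here, and the output is omitted):

data("calcium")
refInterval(
  x = calcium$Value,
  RI_method = "parametric", CI_method = "parametric",
  refLevel = 0.9
)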
confidence interval similar altered confLevel = 0.95 like compute 95% confidence interval limit reference interval.","code":"refInterval(x = calcium$Value, RI_method = \"parametric\", CI_method = \"parametric\") #> #> Reference Interval Method: parametric, Confidence Interval Method: parametric #> #> Call: refInterval(x = calcium$Value, RI_method = \"parametric\", CI_method = \"parametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.05, 10.32 #> RefLower Confidence Interval: 8.9926, 9.1100 #> Refupper Confidence Interval: 10.2584, 10.3757 refInterval(x = calcium$Value, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> Reference Interval Method: nonparametric, Confidence Interval Method: nonparametric #> #> Call: refInterval(x = calcium$Value, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.10, 10.30 #> RefLower Confidence Interval: 8.9000, 9.2000 #> Refupper Confidence Interval: 10.3000, 10.4000 refInterval(x = calcium$Value, RI_method = \"robust\", CI_method = \"boot\") #> [1] \"Bootstrape process could take a short while.\" #> #> Reference Interval Method: robust, Confidence Interval Method: boot #> #> Call: refInterval(x = calcium$Value, RI_method = \"robust\", CI_method = \"boot\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.04, 10.32 #> RefLower Confidence Interval: 8.9801, 9.0969 #> Refupper Confidence Interval: 10.2576, 10.3760"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"paired-auc-test","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Paired AUC Test","title":"Introduction to mcradds","text":"aucTest function compares two AUC paired two-sample diagnostic assays using standardized difference method, small difference SE computation compared unpaired design. samples paired design considered independent, SE can computed directly Delong’s method pROC package. order evaluate two paired assays, aucTest function three assessment methods including ‘difference’, ‘non-inferiority’ ‘superiority’, shown Liu(2006)’s article . Jen-Pei Liu (2006) “Tests equivalence non-inferiority diagnostic accuracy based paired areas ROC curves”. Statist. Med., 25:1219–1238. DOI: 10.1002/sim.2358. Suppose want compare paired AUC OxLDL LDL assays ldlroc data set, null hypothesis difference AUC area. Suppose want see OxLDL assay superior LDL assay margin equal 0.1. case null hypothesis difference less 0.1. Suppose want see OxLDL assay non-inferior LDL assay margin equal -0.1. 
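(An arithmetic aside before that null hypothesis is stated: the Z statistics and p-values reported by aucTest() in the outputs below follow directly from the AUC difference and its standard error. A minimal hand check using the numbers printed in those outputs:)

diff_auc <- 0.2378                         # difference in AUC from the aucTest() outputs below
se_diff  <- 0.0790                         # standard error of the difference
(z_diff   <- (diff_auc - 0)   / se_diff)   # ~3.01, test of difference
(z_sup    <- (diff_auc - 0.1) / se_diff)   # ~1.74, superiority margin 0.1
(z_noninf <- (diff_auc + 0.1) / se_diff)   # ~4.27, non-inferiority margin -0.1
2 * pnorm(z_diff, lower.tail = FALSE)      # ~0.0026, two-sided p
pnorm(z_sup, lower.tail = FALSE)           # ~0.041, one-sided p
pnorm(z_noninf, lower.tail = FALSE)        # ~9.6e-06, one-sided p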
case null hypothesis difference less -0.1.","code":"# H0 : Difference between areas = 0: aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing difference based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is difference to 0 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 3.0088 #> Pvalue: 0.002623 # H0 : Superiority margin <= 0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"superiority\", h0 = 0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing superiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is superiority to 0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 1.7436 #> Pvalue: 0.04061 # H0 : Non-inferiority margin <= -0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"non-inferiority\", h0 = -0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing non-inferiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is non-inferiority to -0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 4.2739 #> Pvalue: 9.606e-06"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"reproducibility-analysis-reader-precision","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Reproducibility Analysis (Reader Precision)","title":"Introduction to mcradds","text":"PDL1 assay trials, must estimate reader precision different readers reads sites, using APA, ANA OPA primary endpoint. getAccuracy function can implement computations reader precision trials belong qualitative trials. distinction trial, comparative assay, just stained specimen scored different pathologists (readers). can determine one can reference, instead compare comparison. PDL1RP example data, 150 specimens stained one PD-L1 assay three different sites, 50 specimens . PDL1RP$wtn_reader sub-data, 3 readers selected three different sites responsible scoring 50 specimens . Thus might want evaluate reproducibility within three readers three site. 
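As a quick reference for these endpoints, the point estimates of APA, ANA and OPA can be computed by hand from the pooled 2x2 agreement table. The sketch below is a minimal illustration; the counts are the pooled between-reader pairwise counts that also appear in the diagTab() reference example, and they reproduce the EST column of getAccuracy(), which additionally supplies bootstrap confidence intervals. The remaining two sub-data sets are described right after this sketch.

n_pp <- 200; n_pn <- 7; n_np <- 15; n_nn <- 228      # both +, +/-, -/+, both -
2 * n_pp / (2 * n_pp + n_pn + n_np)                  # APA ~0.9479
2 * n_nn / (2 * n_nn + n_pn + n_np)                  # ANA ~0.9540
(n_pp + n_nn) / (n_pp + n_pn + n_np + n_nn)          # OPA ~0.9511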
PDL1RP$wtn_reader sub-data, one reader selected three different sites responsible scoring 50 specimens 3 times minimum 2 weeks reads means process score. Thus might want evaluate reproducibility within three reads specimens. PDL1RP$btw_site sub-data, one reader selected three different sites responsible scoring 150 specimens , collected three sites. Thus might want evaluate reproducibility within three site.","code":"reader <- PDL1RP$btw_reader tb1 <- reader %>% diagTab( formula = Reader ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Site\" ) getAccuracy(tb1, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9479 0.9260 0.9686 #> ana 0.9540 0.9342 0.9730 #> opa 0.9511 0.9311 0.9711 read <- PDL1RP$wtn_reader tb2 <- read %>% diagTab( formula = Order ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Sample\" ) getAccuracy(tb2, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9442 0.9204 0.9657 #> ana 0.9489 0.9273 0.9681 #> opa 0.9467 0.9244 0.9667 site <- PDL1RP$btw_site tb3 <- site %>% diagTab( formula = Site ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Sample\" ) getAccuracy(tb2, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9442 0.9204 0.9657 #> ana 0.9489 0.9273 0.9681 #> opa 0.9467 0.9244 0.9667"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"precision-evaluation","dir":"Articles","previous_headings":"Common IVD Trials Analyses","what":"Precision Evaluation","title":"Introduction to mcradds","text":"precision evaluation commonly used IVD trials, necessary include process end-users laboratories’ QC procedure verifying repeatability within-laboratory precision. wrapped main key functions Roche’s VCA, well mcr package. ’s recommended read details ?anovaVCA ?VCAinference functions CLSI-EP05 help understanding outputs, CV%.","code":"fit <- anovaVCA(value ~ day / run, glucose) VCAinference(fit) #> #> #> #> Inference from (V)ariance (C)omponent (A)nalysis #> ------------------------------------------------ #> #> > VCA Result: #> ------------- #> #> Name DF SS MS VC %Total SD CV[%] #> 1 total 64.7773 12.9336 100 3.5963 1.4727 #> 2 day 19 415.8 21.8842 1.9586 15.1432 1.3995 0.5731 #> 3 day:run 20 281 14.05 3.075 23.7754 1.7536 0.7181 #> 4 error 40 316 7.9 7.9 61.0814 2.8107 1.151 #> #> Mean: 244.2 (N = 80) #> #> Experimental Design: balanced | Method: ANOVA #> #> #> > VC: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 12.9336 9.4224 18.8614 9.9071 17.7278 #> day 1.9586 #> day:run 3.0750 #> error 7.9000 5.3251 12.9333 5.6673 11.9203 #> #> > SD: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 3.5963 3.0696 4.3430 3.1476 4.2104 #> day 1.3995 #> day:run 1.7536 #> error 2.8107 2.3076 3.5963 2.3806 3.4526 #> #> > CV[%]: #> -------- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 1.4727 1.257 1.7785 1.2889 1.7242 #> day 0.5731 #> day:run 0.7181 #> error 1.1510 0.945 1.4727 0.9749 1.4138 #> #> #> 95% Confidence Level #> SAS PROC MIXED method used for computing CIs"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"common-visualizations","dir":"Articles","previous_headings":"","what":"Common Visualizations","title":"Introduction to mcradds","text":"term visualizations IVD trials, two common plots presented clinical reports, Bland-Altman plot Regression plot. 
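Before moving on to the plotting functions, one small reading aid for the precision output above: the CV[%] column in the VCA tables is simply the corresponding standard deviation expressed as a percentage of the overall mean (a one-line arithmetic check using the numbers printed above):

100 * 3.5963 / 244.2   # total SD / mean * 100 = ~1.4727, the reported total CV[%]
100 * 1.3995 / 244.2   # ~0.5731, the reported CV[%] for day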
Turning to the plots themselves: we don't use two different functions to draw them, as both are wrapped in the autoplot() function, so each plot can be obtained by just calling autoplot() on the corresponding object.","code":""},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"bland-altman-plot","dir":"Articles","previous_headings":"Common Visualizations","what":"Bland-Altman plot","title":"Introduction to mcradds","text":"To generate a Bland-Altman plot, create an object with the blandAltman() function and then call autoplot on it; you can choose the Bland-Altman type you require, ‘absolute’ or ‘relative’, and add drawing arguments if you would like to adjust the format. The detailed arguments can be found in ?autoplot.","code":"object <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) # Absolute difference plot autoplot(object, type = \"absolute\") # Relative difference plot autoplot(object, type = \"relative\") autoplot( object, type = \"absolute\", jitter = TRUE, fill = \"lightblue\", color = \"grey\", size = 2, ref.line.params = list(col = \"grey\"), loa.line.params = list(col = \"grey\"), label.digits = 2, label.params = list(col = \"grey\", size = 3, fontface = \"italic\"), x.nbreak = 6, main.title = \"Bland-Altman Plot\", x.title = \"Mean of Test and Reference Methods\", y.title = \"Reference - Test\" )"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"regression-plot","dir":"Articles","previous_headings":"Common Visualizations","what":"Regression plot","title":"Introduction to mcradds","text":"To generate a regression plot, create an object with the mcreg() function and then call autoplot on it. Please note that while the mcr package is not available on CRAN, the mcreg2 function can be used temporarily in its place. More arguments can be used, as shown below.","code":"fit <- mcreg2( x = platelet$Comparative, y = platelet$Candidate, method.reg = \"PaBa\", method.ci = \"bootstrap\" ) autoplot(fit) autoplot( fit, identity.params = list(col = \"blue\", linetype = \"solid\"), reg.params = list(col = \"red\", linetype = \"solid\"), equal.axis = TRUE, legend.title = FALSE, legend.digits = 3, x.title = \"Reference\", y.title = \"Test\" )"},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"summary","dir":"Articles","previous_headings":"","what":"Summary","title":"Introduction to mcradds","text":"In summary, mcradds contains multiple functions and methods for the internal statistical analyses and QC procedures of IVD trials. The design of the package aims to expand the analysis scope of the mcr package and to give users a lot of flexibility in meeting their analysis needs. Given that this package has not been validated through a GCP process, it is not recommended for use in regulatory submissions.
However can give assist supplementary analysis needs regulatory.","code":""},{"path":"https://kaigu1990.github.io/mcradds/articles/mcradds.html","id":"session-info","dir":"Articles","previous_headings":"","what":"Session Info","title":"Introduction to mcradds","text":"output sessionInfo() system.","code":"#> R version 4.3.1 (2023-06-16) #> Platform: x86_64-pc-linux-gnu (64-bit) #> Running under: Ubuntu 22.04.3 LTS #> #> Matrix products: default #> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 #> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so; LAPACK version 3.10.0 #> #> locale: #> [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 #> [4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 #> [7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C #> [10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C #> #> time zone: UTC #> tzcode source: system (glibc) #> #> attached base packages: #> [1] stats graphics grDevices datasets utils methods base #> #> other attached packages: #> [1] mcradds_1.0.1 #> #> loaded via a namespace (and not attached): #> [1] gld_2.6.6 gtable_0.3.4 xfun_0.40 #> [4] bslib_0.5.1 ggplot2_3.4.3 lattice_0.21-8 #> [7] numDeriv_2016.8-1.1 vctrs_0.6.3 tools_4.3.1 #> [10] generics_0.1.3 tibble_3.2.1 proxy_0.4-27 #> [13] fansi_1.0.5 pkgconfig_2.0.3 Matrix_1.5-4.1 #> [16] data.table_1.14.8 checkmate_2.2.0 desc_1.4.2 #> [19] readxl_1.4.3 lifecycle_1.0.3 rootSolve_1.8.2.4 #> [22] farver_2.1.1 compiler_4.3.1 stringr_1.5.0 #> [25] textshaping_0.3.7 Exact_3.2 munsell_0.5.0 #> [28] htmltools_0.5.6.1 DescTools_0.99.50 class_7.3-22 #> [31] sass_0.4.7 yaml_2.3.7 nloptr_2.0.3 #> [34] pillar_1.9.0 pkgdown_2.0.7 jquerylib_0.1.4 #> [37] MASS_7.3-60 cachem_1.0.8 boot_1.3-28.1 #> [40] nlme_3.1-162 tidyselect_1.2.0 digest_0.6.33 #> [43] mvtnorm_1.2-3 stringi_1.7.12 dplyr_1.1.3 #> [46] purrr_1.0.2 labeling_0.4.3 splines_4.3.1 #> [49] rprojroot_2.0.3 fastmap_1.1.1 grid_4.3.1 #> [52] colorspace_2.1-0 lmom_3.0 expm_0.999-7 #> [55] cli_3.6.1 magrittr_2.0.3 utf8_1.2.3 #> [58] VCA_1.4.5 e1071_1.7-13 withr_2.5.1 #> [61] scales_1.2.1 backports_1.4.1 rmarkdown_2.25 #> [64] httr_1.4.7 lme4_1.1-34 cellranger_1.1.0 #> [67] ragg_1.2.6 memoise_2.0.1 evaluate_0.22 #> [70] knitr_1.44 rlang_1.1.1 Rcpp_1.0.11 #> [73] glue_1.6.2 renv_0.15.5 pROC_1.18.4 #> [76] minqa_1.2.6 rstudioapi_0.15.0 jsonlite_1.8.7 #> [79] plyr_1.8.9 R6_2.5.1 systemfonts_1.0.5 #> [82] fs_1.6.3"},{"path":"https://kaigu1990.github.io/mcradds/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Kai Gu. Author, maintainer, copyright holder.","code":""},{"path":"https://kaigu1990.github.io/mcradds/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Gu K (2023). mcradds: Processing Analyzing IVD Trials. https://github.com/kaigu1990/mcradds, https://kaigu1990.github.io/mcradds/.","code":"@Manual{, title = {mcradds: Processing and Analyzing of IVD Trials}, author = {Kai Gu}, year = {2023}, note = {https://github.com/kaigu1990/mcradds, https://kaigu1990.github.io/mcradds/}, }"},{"path":"https://kaigu1990.github.io/mcradds/index.html","id":"mcradds-","dir":"","previous_headings":"","what":"Processing and Analyzing of IVD Trials","title":"Processing and Analyzing of IVD Trials","text":"mcradds R package complement mcr package, contains common solid functions designing, analyzing visualization Vitro Diagnostic trials. methods algorithms refer CLSI recommendations NMPA guidelines. 
package provides series typical functionality, shown : Estimation sample size trials, NMPA guideline. Diagnostic accuracy /without standard/golden reference, CLSI EP12-A2. Regression analysis plot method comparison, CLSI EP09-A3. Bland-Altman analysis plot method comparison, CLSI EP09-A3. Outlier detection 4E method CLSI EP09-A2 ESD CLSI EP09-A3. Evaluation bias medical decision level, CLSI EP09-A3. Pearson Spearman correlation adding hypothesis test confidence interval. Establishing Reference Range/Interval, CLSI EP28-A3 NMPA guideline. Paired ROC/AUC test superiority non-inferiority trials, CLSI EP05-A3/EP15-A3. Reproducibility analysis (reader precision) immunohistochemical assays, CLSI /LA28-A2 NMPA guideline. Evaluation precision quantitative measurements, CLSI EP05-A3.","code":""},{"path":"https://kaigu1990.github.io/mcradds/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Processing and Analyzing of IVD Trials","text":"mcradds available CRAN can install latest released version : can install development version directly GitHub : See package vignettes browseVignettes(package = \"mcradds\") usage package.","code":"install.packages(\"mcradds\") if (!require(\"devtools\")) { install.packages(\"devtools\") } devtools::install_github(\"kaigu1990/mcradds\")"},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":null,"dir":"Reference","previous_headings":"","what":"BAsummary Class — BAsummary-class","title":"BAsummary Class — BAsummary-class","text":"BAsummary class used display BlandAltman analysis outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"BAsummary Class — BAsummary-class","text":"","code":"BAsummary(call, data, stat, param)"},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"BAsummary Class — BAsummary-class","text":"call (call) function call. data (data.frame) stores raw data input. stat (list) contains several statistics numeric data. 
param (list) list relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"BAsummary Class — BAsummary-class","text":"object class BAsummary.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/BAsummary-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"BAsummary Class — BAsummary-class","text":"call call data data outlier outlier param param","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":null,"dir":"Reference","previous_headings":"","what":"EDS Test for Outliers — ESD_test","title":"EDS Test for Outliers — ESD_test","text":"Perform Rosner's generalized extreme Studentized deviate (ESD) test, assumes distribution normal (Gaussian), can used number outliers unknown, becomes robust number samples increases.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"EDS Test for Outliers — ESD_test","text":"","code":"ESD_test(x, alpha = 0.05, h = 5)"},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"EDS Test for Outliers — ESD_test","text":"x (numeric) vector observations can difference Bland-Altman analysis. Normally relative difference preferred IVD trials. Missing(NA) allowed removed. must least 10 available observations x. alpha (numeric) type--risk, \\(\\alpha\\). h (integer) positive integer indicating number suspected outliers. argument h must 1 n-2 n denotes number available values x. default value h = 5.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"EDS Test for Outliers — ESD_test","text":"list class containing results ESD test. stat data frame contains several statistics ESD test includes index(), Mean, SD, raw data(x), location(Obs) x, ESD statistics(ESDi), Lambda Outliers(TRUE FALSE). ord vector order index outliers equal Obs stat data frame.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"EDS Test for Outliers — ESD_test","text":"algorithm determining number outliers follows: Compare ESDi Lambda. ESDi > Lambda observations regards outliers. order index corresponds available x data removed missing (NA) value. compare ESD(h) ESD(h+1) equal, h+1 ESD values shown. identical, can regarded outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"EDS Test for Outliers — ESD_test","text":"CLSI EP09A3 Appendix B. 
Detecting Aberrant Results (Outliers).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ESD_test.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"EDS Test for Outliers — ESD_test","text":"","code":"data(\"platelet\") res <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) ESD_test(x = res@stat$relative_diff) #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE #> #> $ord #> [1] 1 4 2 10 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":null,"dir":"Reference","previous_headings":"","what":"Method Comparison Regression Class — MCR-class","title":"Method Comparison Regression Class — MCR-class","text":"MCR class serves simplified version MCResult mcr package. mcr package available CRAN, class took temporary replacement , contains necessaries autoplot.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Method Comparison Regression Class — MCR-class","text":"","code":"MCR(data, coef, mnames, regmeth)"},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Method Comparison Regression Class — MCR-class","text":"data (data.frame) original data. coef (numeric) numeric vector contains slope intercept. mnames (character) name X Y assays, default 'Method1' 'Method2' defined mcreg function. regmeth (character) name regression.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Method Comparison Regression Class — MCR-class","text":"object class MCR.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCR-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"Method Comparison Regression Class — MCR-class","text":"data data coef coef mnames mnames regmeth regmeth","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":null,"dir":"Reference","previous_headings":"","what":"MCTab Class — MCTab-class","title":"MCTab Class — MCTab-class","text":"MCTab class serves store 2x2 contingency table","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"MCTab Class — MCTab-class","text":"","code":"MCTab(data, tab, levels)"},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"MCTab Class — MCTab-class","text":"data (data.frame) original data set. tab (table)table class converted table() display 2x2 contingency table. 
levels (character) levels measurements.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"MCTab Class — MCTab-class","text":"object class MCTab.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/MCTab-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"MCTab Class — MCTab-class","text":"data data tab candidate levels levels","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/PDL1RP.html","id":null,"dir":"Reference","previous_headings":"","what":"PD-L1 Reader Precision Data — PDL1RP","title":"PD-L1 Reader Precision Data — PDL1RP","text":"dummy data set PD-L1 stained study estimate reproducibility one assay determining PD-L1 status NSCLC tissue specimens. contains three sub-data compute reproducibility within reader (one pathologists, also called reader , scores one specimen three times), reader (three readers scores specimen) site (one reader three sites scores specimens). data sets reference score can used pairwise comparison calculate APA, ANA OPA reply reference.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/PDL1RP.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"PD-L1 Reader Precision Data — PDL1RP","text":"","code":"PDL1RP"},{"path":"https://kaigu1990.github.io/mcradds/reference/PDL1RP.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"PD-L1 Reader Precision Data — PDL1RP","text":"PDL1RP data set contains 3 sub set, sub set includes 150 specimens, 450 observations 4 variables. Sample Sample id Site Site id Order Order reader scoring Reader Reader id, first character represents site id, second character reader number Value Result scoring, Positive Negative","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":null,"dir":"Reference","previous_headings":"","what":"Reference Interval Class — RefInt-class","title":"Reference Interval Class — RefInt-class","text":"RefInt class serves store results reference Interval calculation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reference Interval Class — RefInt-class","text":"","code":"RefInt(call, method, n, data, outlier, refInt, confInt)"},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reference Interval Class — RefInt-class","text":"call (call) function call. method (character) method names reference interval confidence interval. n (numeric) number available samples. data (numeric) numeric raw measurements, outlier removed. outlier (list) list outliers contains index number outliers, data without outliers. refInt (numeric) number reference interval. 
confInt (list) list confidence interval lower upper reference limit.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reference Interval Class — RefInt-class","text":"object class RefInt.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/RefInt-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"Reference Interval Class — RefInt-class","text":"call call method method n n data data outlier outlier refInt refInt confInt confInt","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":null,"dir":"Reference","previous_headings":"","what":"Inferential Statistics for VCA-Results — VCAinference","title":"Inferential Statistics for VCA-Results — VCAinference","text":"copy VCA::VCAinference VCA package","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Inferential Statistics for VCA-Results — VCAinference","text":"","code":"VCAinference(...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Inferential Statistics for VCA-Results — VCAinference","text":"... Arguments passed VCA::VCAinference obj (object) class 'VCA' , alternatively, list 'VCA' objects, argument can specified vectors, -th vector element applies -th element 'obj' (see examples) alpha (numeric) value specifying significance level \\(100*(1-alpha)\\)% confidence intervals. total.claim (numeric) value specifying claim-value Chi-Squared test total variance (SD CV, see claim.type). error.claim (numeric) value specifying claim-value Chi-Squared test error variance (SD CV, see claim.type). claim.type (character) one \"VC\", \"SD\", \"CV\" specifying claim-values interpreted: \"VC\" (Default) = claim-value(s) specified terms variance(s), \"SD\" = claim-values specified terms standard deviations (SD), \"CV\" = claim-values specified terms coefficient(s) variation (CV) specified percentages. set \"SD\" \"CV\", claim-values converted variances applying Chi-Squared test (see examples). VarVC (logical) TRUE = element \"Matrices\" exists (see anovaVCA), covariance matrix estimated VCs computed (see vcovVC, used CIs intermediate VCs 'method.ci=\"sas\"'. Note, might take long larger datasets, since many matrix operations involved. FALSE (Default) = computing covariance matrix VCs omitted, well CIs intermediate VCs. excludeNeg (logical) TRUE = confidence intervals negative variance estimates reported. FALSE = confidence intervals VCs reported including negative VCs. See details section thorough explanation. constrainCI (logical) TRUE = CI-limits variance components constrained >= 0. FALSE = unconstrained CIs potentially negative CI-limits reported. preserve original width CIs. See details section thorough explanation. ci.method (character) string abbreviation specifying approach use computing confidence intervals variance components (VC). \"sas\" (default) uses Chi-Squared based CIs total error normal approximation VCs (Wald-limits, option \"NOBOUND\" SAS PROC MIXED); \"satterthwaite\" approximate DFs VC using Satterthwaite approach (see SattDF models fitted ANOVA) Cis based Chi-Squared distribution. approach conservative avoids negative values lower bounds. 
quiet (logical) TRUE = suppress warning, issued otherwise","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Inferential Statistics for VCA-Results — VCAinference","text":"object VCAinference contains series statistics.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/VCAinference.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Inferential Statistics for VCA-Results — VCAinference","text":"","code":"data(glucose) fit <- anovaVCA(value ~ day / run, glucose) VCAinference(fit) #> #> #> #> Inference from (V)ariance (C)omponent (A)nalysis #> ------------------------------------------------ #> #> > VCA Result: #> ------------- #> #> Name DF SS MS VC %Total SD CV[%] #> 1 total 64.7773 12.9336 100 3.5963 1.4727 #> 2 day 19 415.8 21.8842 1.9586 15.1432 1.3995 0.5731 #> 3 day:run 20 281 14.05 3.075 23.7754 1.7536 0.7181 #> 4 error 40 316 7.9 7.9 61.0814 2.8107 1.151 #> #> Mean: 244.2 (N = 80) #> #> Experimental Design: balanced | Method: ANOVA #> #> #> > VC: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 12.9336 9.4224 18.8614 9.9071 17.7278 #> day 1.9586 #> day:run 3.0750 #> error 7.9000 5.3251 12.9333 5.6673 11.9203 #> #> > SD: #> ----- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 3.5963 3.0696 4.3430 3.1476 4.2104 #> day 1.3995 #> day:run 1.7536 #> error 2.8107 2.3076 3.5963 2.3806 3.4526 #> #> > CV[%]: #> -------- #> Estimate CI LCL CI UCL One-Sided LCL One-Sided UCL #> total 1.4727 1.257 1.7785 1.2889 1.7242 #> day 0.5731 #> day:run 0.7181 #> error 1.1510 0.945 1.4727 0.9749 1.4138 #> #> #> 95% Confidence Level #> SAS PROC MIXED method used for computing CIs #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":null,"dir":"Reference","previous_headings":"","what":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"copy VCA::anovaVCA VCA package","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"","code":"anovaVCA(...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"... Arguments passed VCA::anovaVCA form (formula) specifying model fit, response variable left '~' mandatory Data (data.frame) containing variables referenced 'form' (factor, character) variable specifying groups analysis performed individually, .e. -processing NegVC (logical) FALSE = negative variance component estimates (VC) set 0 contribute total variance (done SAS PROC NESTED, conservative estimate total variance). original ANOVA estimates can found element 'VCoriginal'. degrees freedom total variance based adapted mean squares (MS), .e. adapted MS computed \\(D * VC\\), VC column vector negative VCs set 0. TRUE = negative variance component estimates set 0 contribute total variance (original definition total variance). VarVC.method (character) string specifying whether use algorithm given Searle et al. 
(1992) corresponds VarVC.method=\"scm\" Giesbrecht Burns (1985) can specified via \"gb\". Method \"scm\" (Searle, Casella, McCulloch) exact algorithm, \"gb\" (Giesbrecht, Burns) termed \"rough approximation\" authors, sufficiently exact compared e.g. SAS PROC MIXED (method=type1) uses inverse Fisher-Information matrix approximation. balanced designs methods give identical results, unbalanced designs differences occur. MME (logical) TRUE = (M)ixed (M)odel (E)quations solved, .e. 'VCA' object additional elements \"RandomEffects\", \"FixedEffects\", \"VarFixed\" (variance-covariance matrix fixed effects) \"Matrices\" element addional elements corresponding intermediate results solving MMEs. FALSE = solve MMEs, reduces computation time complex models significantly. quiet (logical) TRUE = suppress warning, issued otherwise order.data (logical) TRUE = class-variables ordered increasingly, FALSE = ordering class-variables remain ","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"class VCA downstream analysis.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/anovaVCA.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"ANOVA-Type Estimation of Variance Components for Random Models — anovaVCA","text":"","code":"data(glucose) anovaVCA(value ~ day / run, glucose) #> #> #> Result Variance Component Analysis: #> ----------------------------------- #> #> Name DF SS MS VC %Total SD CV[%] #> 1 total 64.77732 12.933553 100 3.596325 1.472697 #> 2 day 19 415.8 21.884211 1.958553 15.143191 1.399483 0.573089 #> 3 day:run 20 281 14.05 3.075 23.77537 1.753568 0.718087 #> 4 error 40 316 7.9 7.9 61.081439 2.810694 1.15098 #> #> Mean: 244.2 (N = 80) #> #> Experimental Design: balanced | Method: ANOVA #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":null,"dir":"Reference","previous_headings":"","what":"AUC Test for Paired Two-sample Measurements — aucTest","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"function compares two AUC paired two-sample diagnostic assays standardized difference method, little difference SE calculation unpaired design. order compare two assays, function provides three assessments including 'difference', 'non-inferiority' 'superiority'. method comparing referred Liu(2006)'s article can found reference section .","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"","code":"aucTest( x, y, response, h0 = 0, conf.level = 0.95, method = c(\"difference\", \"non-inferiority\", \"superiority\"), ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"x (numeric) reference/standard diagnostic assay. y (numeric) test diagnostic assay. response (numeric factor) vector responses represent type classes, typically encoded 0(controls) 1(cases). h0 (numeric) specified hypothesized value margin two assays, default 0 difference method. select non-inferiority method, h0 negative value. select superiority method, non-negative value. 
conf.level (numeric) significance level 0 1 (non-inclusive) returned confidence interval. method (string) string specifying type hypothesis test, must one \"difference\" (default), \"non-inferiority\" \"superiority\". ... arguments passed pROC::roc().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"RefInt object contains relevant results comparing paired ROC two-sample assays.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"samples considered independent, paired design, SE can computed method Delong provided pROC package. aucTest function use standardized difference approach Liu(2006) publication compute SE corresponding hypothesis test statistic paired design study. difference test difference two diagnostic tests, default h0 zero. non-inferiority test new diagnostic tests worse standard diagnostic test specific margin, time maybe safer, easier administer cost less. superiority test test new diagnostic tests better standard diagnostic test specific margin(default zero), better efficacy.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"test significance difference equal result EP24A2 Appendix D. Table D2. Table D2 uses method Hanley & McNeil (1982), whereas function uses method DeLong et al. (1988), results difference SE. Thus corresponding Z statistic P value equal well.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"Jen-Pei Liu (2006) \"Tests equivalence non-inferiority diagnostic accuracy based paired areas ROC curves\". Statist. Med. , 25:1219–1238. 
DOI: 10.1002/sim.2358.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/aucTest.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"AUC Test for Paired Two-sample Measurements — aucTest","text":"","code":"data(\"ldlroc\") # H0 : Difference between areas = 0: aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing difference based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is difference to 0 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 3.0088 #> Pvalue: 0.002623 # H0 : Superiority margin <= 0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"superiority\", h0 = 0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing superiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is superiority to 0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 1.7436 #> Pvalue: 0.04061 # H0 : Non-inferiority margin <= -0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"non-inferiority\", h0 = -0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing non-inferiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is non-inferiority to -0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 4.2739 #> Pvalue: 9.606e-06"},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"Draw ggplot-based difference Bland-Altman plot reference assay vs. test assay BAsummary object, regression plot MCResult. 
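Because the returned object is a regular ggplot (see the Value and Note sections below), further ggplot2 layers can simply be appended to it. A small usage sketch, assuming the platelet example data and an attached ggplot2; the appended theme and title are illustrative only:

library(mcradds)
library(ggplot2)
data("platelet")
obj <- blandAltman(x = platelet$Comparative, y = platelet$Candidate)
# autoplot() returns a ggplot object, so additional layers can be added with `+`
autoplot(obj, type = "absolute") +
  theme_bw() +
  ggtitle("Bland-Altman plot with an appended ggplot2 theme")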
Also Providing necessary useful option arguments presentation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"","code":"autoplot(object, ...) # S4 method for BAsummary autoplot( object, type = c(\"absolute\", \"relative\"), color = \"black\", fill = \"lightgray\", size = 1.5, shape = 21, jitter = FALSE, ref.line = TRUE, ref.line.params = list(col = \"blue\", linetype = \"solid\", size = 1), ci.line = FALSE, ci.line.params = list(col = \"blue\", linetype = \"dashed\"), loa.line = TRUE, loa.line.params = list(col = \"blue\", linetype = \"dashed\"), label = TRUE, label.digits = 4, label.params = list(col = \"black\", size = 4), x.nbreak = NULL, y.nbreak = NULL, x.title = NULL, y.title = NULL, main.title = NULL ) # S4 method for MCR autoplot( object, color = \"black\", fill = \"lightgray\", size = 1.5, shape = 21, jitter = FALSE, identity = TRUE, identity.params = list(col = \"gray\", linetype = \"dashed\"), reg = TRUE, reg.params = list(col = \"blue\", linetype = \"solid\"), equal.axis = FALSE, legend.title = TRUE, legend.digits = 2, x.nbreak = NULL, y.nbreak = NULL, x.title = NULL, y.title = NULL, main.title = NULL )"},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"object (BAsummary, MCResult) input, depending function done, blandAltman() mcreg(). ... used. type (string) difference type input, default 'absolute'. color, fill (string) point colors. size (numeric) size points. shape (integer) ggplot shape points. jitter (logical) whether add small amount random variation location points. ref.line (logical) whether plot 'mean' line, default TRUE. ref.line.params, ci.line.params, loa.line.params (list) parameters (color, linetype, linewidth) argument 'ref.line', 'ci.line' 'loa.line'; eg. ref.line.params = list(col = \"blue\", linetype = \"solid\", linewidth = 1). ci.line (logical) whether plot confidence interval line 'mean', default FALSE. loa.line (logical) whether plot limit agreement line, default TRUE. label (logical) whether add specific value label line (ref.line, ci.line loa.line). shown line defined TRUE. label.digits (integer) number digits decimal point label. label.params (list) parameters (color, size, fontface) argument 'label'. x.nbreak, y.nbreak (integer) integer guiding number major breaks x-axis y-axis. x.title, y.title, main.title (string) x-axis, y-axis main title plot. identity (logical) whether add identity line, default TRUE. identity.params, reg.params (list) parameters (color, linetype) argument 'identity' 'reg'; eg. identity.params = list(col = \"gray\", linetype = \"dashed\"). reg (logical) whether add regression line slope intercept obtained mcr::mcreg() function, default TRUE. equal.axis (logical) whether adjust ranges x-axis y-axis identical. equal.axis = TRUE, x-axis equal y-axis. legend.title (logical) whether present title legend. 
legend.digits (integer) number digits decimal point legend.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"ggplot based Bland-Altman plot regression plot can easily customized using additional ggplot functions.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"like alter part autoplot function provided, adding ggplot statements suggested.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/autoplot.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate a ggplot for Bland-Altman Plot and Regression Plot — autoplot","text":"","code":"# Specify the type for difference plot data(\"platelet\") object <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) autoplot(object) autoplot(object, type = \"relative\") # Set the addition parameters for `geom_point` autoplot(object, type = \"relative\", jitter = TRUE, fill = \"lightblue\", color = \"grey\", size = 2 ) # Set the color and line type for reference and limits of agreement lines autoplot(object, type = \"relative\", ref.line.params = list(col = \"red\", linetype = \"solid\"), loa.line.params = list(col = \"grey\", linetype = \"solid\") ) # Set label color, size and digits autoplot(object, type = \"absolute\", ref.line.params = list(col = \"grey\"), loa.line.params = list(col = \"grey\"), label.digits = 2, label.params = list(col = \"grey\", size = 3, fontface = \"italic\") ) # Add main title, X and Y axis titles, and adjust X ticks. autoplot(object, type = \"absolute\", x.nbreak = 6, main.title = \"Bland-Altman Plot\", x.title = \"Mean of Test and Reference Methods\", y.title = \"Reference - Test\" ) if (FALSE) { # Using the default arguments for regression plot data(\"platelet\") fit <- mcreg2( x = platelet$Comparative, y = platelet$Candidate, method.reg = \"Deming\", method.ci = \"jackknife\" ) autoplot(fit) # Only present the regression line and alter the color and shape. autoplot(fit, identity = FALSE, reg.params = list(col = \"grey\", linetype = \"dashed\"), legend.title = FALSE, legend.digits = 4 ) }"},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate Statistics for Bland-Altman — blandAltman","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"Calculate Bland-Altman related statistics specific difference type, difference, limited agreement confidence interval. outlier detecting function graphic function get difference result .","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"","code":"blandAltman(x, y, sid = NULL, type1 = 3, type2 = 5, conf.level = 0.95)"},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"x (numeric) reference method. y (numeric) test method. sid (numeric string) sample id. 
type1 (integer) specifying specific difference absolute difference, default 3. type2 (integer) specifying specific difference relative difference, default 5. conf.level (numeric) significance level two side, default 0.95.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"object BAsummary class contains BlandAltman analysis. data data frame contains raw data input. stat list contains summary table (tab) Bland-Altman analysis, vector (absolute_diff) absolute difference vector (relative_diff) relative difference.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/blandAltman.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate Statistics for Bland-Altman — blandAltman","text":"","code":"data(\"platelet\") blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # with sample id as input sid blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> sid = platelet$Sample) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # Specifiy the type for difference blandAltman(x = platelet$Comparative, y = platelet$Candidate, type1 = 1, type2 = 4) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate, #> type1 = 1, type2 = 4) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/X #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.078 ( 0.173) #> Median 6.350 0.056 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.125) #> Min, Max (-47.800, 42.100) (-0.341, 1.000) #> Limit of Agreement (-24.011, 38.671) (-0.261, 0.417) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.047, 0.109)"},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":null,"dir":"Reference","previous_headings":"","what":"Reference Interval Data — calcium","title":"Reference Interval Data — calcium","text":"example calcium can used compute reference range Calcium 240 medical students sex.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reference Interval Data — calcium","text":"","code":"calcium"},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Reference Interval Data — calcium","text":"calcium data set contains 240 observations 3 variables. 
Sample Sample id Value Measurements target subjects Group Sex group target subjects","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/calcium.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Reference Interval Data — calcium","text":"CLSI-EP28A3 Table 4. cited data set.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":null,"dir":"Reference","previous_headings":"","what":"Concatenate and Print with Newline — cat_with_newline","title":"Concatenate and Print with Newline — cat_with_newline","text":"function concatenates inputs like cat() prints newline.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Concatenate and Print with Newline — cat_with_newline","text":"","code":"cat_with_newline(...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Concatenate and Print with Newline — cat_with_newline","text":"... inputs concatenate.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Concatenate and Print with Newline — cat_with_newline","text":"None, used side effect producing concatenated output R console.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/cat_with_newline.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Concatenate and Print with Newline — cat_with_newline","text":"","code":"cat_with_newline(\"hello\", \"world\") #> hello world"},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":null,"dir":"Reference","previous_headings":"","what":"Creates Contingency Table — diagTab","title":"Creates Contingency Table — diagTab","text":"Creates 2x2 contingency table data frame matrix qualitative performance reader precision downstream analysis.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Creates Contingency Table — diagTab","text":"","code":"diagTab( formula = ~., data, bysort = NULL, dimname = NULL, levels = NULL, rep = FALSE, across = NULL )"},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Creates Contingency Table — diagTab","text":"formula (numeric) formula object cross-classifying variables (separated +) right hand side. data wide structure, row name contingency represented variable left + sign, col name right. data long structure, classified variable put left formula, value variable put right. data (data.frame matrix) data frame matrix. bysort (string) sorted variable col names data, grouped variable reproducibility analysis. dimname (vector) character vector define row name contingency table first variable, col name second variable. levels (vector) vector known levels measurements. rep (logical) whether implement reproducibility like reader precision . across (string) across variable split original data set subsets. 
-reader within-reader precision's across variable site commonly.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Creates Contingency Table — diagTab","text":"object matrix contains 2x2 contingency table.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Creates Contingency Table — diagTab","text":"attention like generate 2x2 contingency table reproducibility analysis, original data long structure using corresponding formula.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/diagTab.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Creates Contingency Table — diagTab","text":"","code":"# For qualitative performance with wide data structure data(\"qualData\") qualData %>% diagTab(formula = ~ CandidateN + ComparativeN) #> Contingency Table: #> #> levels: 0 1 #> ComparativeN #> CandidateN 0 1 #> 0 54 16 #> 1 8 122 qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) #> Contingency Table: #> #> levels: 1 0 #> ComparativeN #> CandidateN 1 0 #> 1 122 8 #> 0 16 54 # For qualitative performance with long data structure dummy <- data.frame( id = c(\"1001\", \"1001\", \"1002\", \"1002\", \"1003\", \"1003\"), value = c(1, 0, 0, 0, 1, 1), type = c(\"Test\", \"Ref\", \"Test\", \"Ref\", \"Test\", \"Ref\") ) dummy %>% diagTab( formula = type ~ value, bysort = \"id\", dimname = c(\"Test\", \"Ref\"), levels = c(1, 0) ) #> Contingency Table: #> #> levels: 1 0 #> Ref #> Test 1 0 #> 1 1 1 #> 0 0 1 # For Between-Reader precision performance data(\"PDL1RP\") reader <- PDL1RP$btw_reader reader %>% diagTab( formula = Reader ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Site\" ) #> Contingency Table: #> #> levels: Positive Negative #> Pairwise2 #> Pairwise1 Positive Negative #> Positive 200 7 #> Negative 15 228"},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":null,"dir":"Reference","previous_headings":"","what":"Detect Dixon Outlier — dixon_outlier","title":"Detect Dixon Outlier — dixon_outlier","text":"Help function detects potential outlier Dixon method, following rules EP28A3 NMPA guideline establishment reference range.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Detect Dixon Outlier — dixon_outlier","text":"","code":"dixon_outlier(x)"},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Detect Dixon Outlier — dixon_outlier","text":"x (numeric) numeric input.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Detect Dixon Outlier — dixon_outlier","text":"list contains outliers vector without outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/dixon_outlier.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Detect Dixon Outlier — dixon_outlier","text":"","code":"x <- c(13.6, 44.4, 45.9, 11.9, 41.9, 53.3, 44.7, 95.2, 44.1, 50.7, 45.2, 60.1, 89.1) dixon_outlier(x) #> $ord #> [1] 1 4 8 13 #> #> $out #> [1] 13.6 11.9 
95.2 89.1 #> #> $subset #> [1] 44.4 45.9 41.9 53.3 44.7 44.1 50.7 45.2 60.1 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute Critical Value for ESD Test — esd.critical","title":"Compute Critical Value for ESD Test — esd.critical","text":"helper function find lambda potential outliers iteration.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute Critical Value for ESD Test — esd.critical","text":"","code":"esd.critical(alpha, N, i)"},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute Critical Value for ESD Test — esd.critical","text":"alpha (numeric) type--risk, \\(\\alpha\\). N (integer) total number samples. (integer) iteration number, less number biggest potential outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute Critical Value for ESD Test — esd.critical","text":"lambda value calculated formula.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/esd.critical.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute Critical Value for ESD Test — esd.critical","text":"","code":"esd.critical(alpha = 0.05, N = 100, i = 1) #> [1] 3.384083"},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary Method for MCTab Objects — getAccuracy","title":"Summary Method for MCTab Objects — getAccuracy","text":"Provides concise summary content MCTab objects. Computes sensitivity, specificity, positive negative predictive values positive negative likelihood ratios diagnostic test reference/gold standard. Computes positive/negative percent agreement, overall percent agreement new test evaluated comparison non-reference standard. Computes average positive/negative agreement tests reference, paired reader precision.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary Method for MCTab Objects — getAccuracy","text":"","code":"getAccuracy(object, ...) # S4 method for MCTab getAccuracy( object, ref = c(\"r\", \"nr\", \"bnr\"), alpha = 0.05, r_ci = c(\"wilson\", \"wald\", \"clopper-pearson\"), nr_ci = c(\"wilson\", \"wald\", \"clopper-pearson\"), bnr_ci = \"bootstrap\", bootCI = c(\"perc\", \"norm\", \"basic\", \"stud\", \"bca\"), nrep = 1000, rng.seed = NULL, digits = 4, ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary Method for MCTab Objects — getAccuracy","text":"object (MCTab) input diagTab function create 2x2 contingency table. ... arguments passed DescTools::BinomCI. ref (character) reference condition. possible choose one condition require. r indicates comparative test standard reference, nr indicates comparative test standard reference, bnr indicates new test comparative test references. alpha (numeric) type--risk, \\(\\alpha\\). r_ci (string) string specifying method calculate confidence interval diagnostic test reference/gold standard. Default wilson. 
Options can wilson, wald clopper-pearson, see DescTools::BinomCI. nr_ci (string) string specifying method calculate confidence interval comparative test non-reference standard. Default wilson. Options can wilson, wald clopper-pearson, see DescTools::BinomCI. bnr_ci (string) string specifying method calculate confidence interval tests reference like reader precision. Default bootstrap. point estimate ANA APA equal 0 100%, method changed transformed wilson. bootCI (string) string specifying bootstrap confidence interval boot.ci() function boot package. Default perc(bootstrap percentile), options can norm(normal approximation), basic(basic bootstrap), stud(studentized bootstrap) bca(adjusted bootstrap percentile). nrep (integer) number replicates bootstrapping, default 1000. rng.seed (integer) number random number generator seed bootstrap sampling. set NULL currently R session used RNG setting used. digits (integer) desired number digits. Default 4.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summary Method for MCTab Objects — getAccuracy","text":"data frame contains qualitative diagnostic accuracy criteria three columns estimated value confidence interval. sens: Sensitivity refers often test positive condition interest present. spec: Specificity refers often test negative condition interest absent. ppv: Positive predictive value refers percentage subjects positive test result target condition. npv: Negative predictive value refers percentage subjects negative test result target condition. plr: Positive likelihood ratio refers probability true positive rate divided false negative rate. nlr: Negative likelihood ratio refers probability false positive rate divided true negative rate. ppa: Positive percent agreement, equals sensitivity candidate method evaluated comparison comparative method, reference/gold standard. npa: Negative percent agreement, equals specificity candidate method evaluated comparison comparative method, reference/gold standard. opa: Overall percent agreement. apa: Average positive agreement refers positive agreements can regarded weighted ppa. 
ana: Average negative agreement refers negative agreements can regarded weighted npa.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getAccuracy.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summary Method for MCTab Objects — getAccuracy","text":"","code":"# For qualitative performance data(\"qualData\") tb <- qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) getAccuracy(tb, ref = \"r\") #> EST LowerCI UpperCI #> sens 0.8841 0.8200 0.9274 #> spec 0.8710 0.7655 0.9331 #> ppv 0.9385 0.8833 0.9685 #> npv 0.7714 0.6605 0.8541 #> plr 6.8514 3.5785 13.1181 #> nlr 0.1331 0.0832 0.2131 getAccuracy(tb, ref = \"nr\", nr_ci = \"wilson\") #> EST LowerCI UpperCI #> ppa 0.8841 0.8200 0.9274 #> npa 0.8710 0.7655 0.9331 #> opa 0.8800 0.8277 0.9180 # For Between-Reader precision performance data(\"PDL1RP\") reader <- PDL1RP$btw_reader tb2 <- reader %>% diagTab( formula = Reader ~ Value, bysort = \"Sample\", levels = c(\"Positive\", \"Negative\"), rep = TRUE, across = \"Site\" ) getAccuracy(tb2, ref = \"bnr\") #> EST LowerCI UpperCI #> apa 0.9479 0.9245 0.9671 #> ana 0.9540 0.9336 0.9714 #> opa 0.9511 0.9311 0.9689 getAccuracy(tb2, ref = \"bnr\", rng.seed = 12306) #> EST LowerCI UpperCI #> apa 0.9479 0.9260 0.9686 #> ana 0.9540 0.9342 0.9730 #> opa 0.9511 0.9311 0.9711"},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":null,"dir":"Reference","previous_headings":"","what":"Detect Outliers From BAsummary Object — getOutlier","title":"Detect Outliers From BAsummary Object — getOutlier","text":"Detect potential outliers absolute relative differences BAsummary object 4E ESD method.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Detect Outliers From BAsummary Object — getOutlier","text":"","code":"getOutlier(object, ...) # S4 method for BAsummary getOutlier( object, method = c(\"ESD\", \"4E\"), difference = c(\"abs\", \"rel\"), alpha = 0.05, h = 5 )"},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Detect Outliers From BAsummary Object — getOutlier","text":"object (BAsummary) input blandAltman function generate Bland-Altman analysis result contains absolute relative differences. ... used. method (string) string specifying method use. Default ESD. difference (string) string specifying difference type use ESD method. Default abs means absolute difference, rel relative difference. alpha (numeric) type--risk. used method defined ESD. h (integer) positive integer indicating number suspected outliers. used method defined ESD.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Detect Outliers From BAsummary Object — getOutlier","text":"list contains statistics results (stat), outliers' ord id (ord), sample id (sid), matrix outliers (outmat) matrix without outliers (rmmat).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Detect Outliers From BAsummary Object — getOutlier","text":"Bland-Altman analysis used input data regardless 4E ESD method necessary determine absolute relative differences beforehand. 
4E method, absolute relative differences required define, bias exceeds 4 fold absolute relative differences. However ESD method, one necessary (latter recommended), bias needs meet ESD test.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/getOutlier.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Detect Outliers From BAsummary Object — getOutlier","text":"","code":"data(\"platelet\") # Using `blandAltman` function with default arguments ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba, method = \"ESD\", difference = \"rel\") #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE #> #> $ord #> [1] 1 4 2 10 #> #> $sid #> [1] 1 4 2 10 #> #> $outmat #> sid x y #> 1 1 1.5 3.0 #> 2 2 4.0 6.9 #> 3 4 10.2 18.5 #> 4 10 16.4 10.8 #> #> $rmmat #> sid x y #> 1 3 9.2 8.0 #> 2 5 11.2 9.0 #> 3 6 12.4 13.0 #> 4 7 14.8 19.7 #> 5 8 14.8 16.0 #> 6 9 15.9 21.9 #> 7 11 17.6 22.6 #> 8 12 18.1 15.9 #> 9 13 18.1 20.0 #> 10 14 19.2 14.0 #> 11 15 19.6 25.9 #> 12 16 19.9 21.8 #> 13 17 20.4 24.5 #> 14 18 21.2 29.2 #> 15 19 22.0 27.0 #> 16 20 22.2 24.0 #> 17 21 23.4 25.8 #> 18 22 25.2 22.0 #> 19 23 25.5 19.7 #> 20 24 25.6 33.4 #> 21 25 26.3 30.0 #> 22 26 26.4 28.9 #> 23 27 27.5 34.3 #> 24 28 28.2 34.3 #> 25 29 30.3 35.8 #> 26 30 31.4 37.8 #> 27 31 32.9 37.1 #> 28 32 33.9 40.3 #> 29 33 34.3 37.1 #> 30 34 35.3 40.0 #> 31 35 38.4 42.2 #> 32 36 39.2 49.3 #> 33 37 48.2 41.0 #> 34 38 49.0 55.0 #> 35 39 51.3 55.0 #> 36 40 52.2 64.6 #> 37 41 60.2 54.8 #> 38 42 61.5 64.6 #> 39 43 78.0 78.6 #> 40 44 80.6 91.4 #> 41 45 84.4 65.7 #> 42 46 85.3 97.2 #> 43 47 89.0 100.0 #> 44 48 92.6 103.2 #> 45 49 94.9 89.6 #> 46 50 108.6 123.4 #> 47 51 110.4 115.0 #> 48 52 115.6 124.4 #> 49 53 116.9 138.1 #> 50 54 122.7 139.2 #> 51 55 143.6 166.8 #> 52 56 146.1 143.7 #> 53 57 146.2 150.8 #> 54 58 154.5 178.5 #> 55 59 161.7 183.4 #> 56 60 167.7 176.1 #> 57 61 176.6 173.7 #> 58 62 179.7 180.4 #> 59 63 188.9 198.9 #> 60 64 189.0 199.4 #> 61 65 197.9 211.1 #> 62 66 201.7 220.1 #> 63 67 207.7 218.3 #> 64 68 209.2 223.4 #> 65 69 210.5 196.8 #> 66 70 210.9 223.8 #> 67 71 214.1 232.2 #> 68 72 218.6 237.1 #> 69 73 232.9 247.9 #> 70 74 235.0 227.0 #> 71 75 237.8 235.3 #> 72 76 246.1 283.0 #> 73 77 252.6 263.5 #> 74 78 254.9 283.5 #> 75 79 261.4 272.3 #> 76 80 262.4 256.6 #> 77 81 270.1 289.2 #> 78 82 271.3 265.7 #> 79 83 273.5 264.5 #> 80 84 274.2 262.2 #> 81 85 281.1 271.1 #> 82 86 297.0 311.7 #> 83 87 298.7 296.5 #> 84 88 326.7 310.2 #> 85 89 327.1 362.1 #> 86 90 329.6 368.5 #> 87 91 332.8 370.6 #> 88 92 337.4 379.5 #> 89 93 340.1 358.3 #> 90 94 364.8 390.6 #> 91 95 370.1 408.4 #> 92 96 390.6 371.0 #> 93 97 395.7 431.7 #> 94 98 419.3 438.7 #> 95 99 421.3 382.3 #> 96 100 426.3 441.8 #> 97 101 440.4 455.6 #> 98 102 443.4 465.8 #> 99 103 446.2 416.4 #> 100 104 462.7 480.3 #> 101 105 467.7 470.7 #> 102 106 507.4 496.7 #> 103 107 568.3 595.9 #> 104 108 599.6 611.0 #> 105 109 613.8 622.3 #> 106 110 633.5 641.3 #> 107 111 678.6 717.5 #> 108 112 687.6 714.9 #> 109 113 695.1 647.3 #> 110 114 701.0 725.6 #> 111 115 708.3 729.5 #> 112 116 735.6 754.5 #> 113 117 794.8 768.5 #> 114 118 937.0 901.6 
#> 115 119 1031.9 1068.0 #> 116 120 1239.3 1279.0 #> # Using sample id as input ba2 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) getOutlier(ba2, method = \"ESD\", difference = \"rel\") #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.06356753 0.1447540 0.6666667 1 4.166372 3.445148 TRUE #> 2 2 0.05849947 0.1342496 0.5783972 4 3.872621 3.442394 TRUE #> 3 3 0.05409356 0.1258857 0.5321101 2 3.797226 3.439611 TRUE #> 4 4 0.05000794 0.1183096 -0.4117647 10 3.903086 3.436800 TRUE #> 5 5 0.05398874 0.1106738 -0.3132530 14 3.318236 3.433961 FALSE #> 6 6 0.05718215 0.1056542 -0.2566372 23 2.970250 3.431092 FALSE #> #> $ord #> [1] 1 4 2 10 #> #> $sid #> [1] \"ID1\" \"ID4\" \"ID2\" \"ID10\" #> #> $outmat #> sid x y #> 1 ID1 1.5 3 #> 2 ID2 4 6.9 #> 3 ID4 10.2 18.5 #> 4 ID10 16.4 10.8 #> #> $rmmat #> sid x y #> 1 ID3 9.2 8 #> 2 ID5 11.2 9 #> 3 ID6 12.4 13 #> 4 ID7 14.8 19.7 #> 5 ID8 14.8 16 #> 6 ID9 15.9 21.9 #> 7 ID11 17.6 22.6 #> 8 ID12 18.1 15.9 #> 9 ID13 18.1 20 #> 10 ID14 19.2 14 #> 11 ID15 19.6 25.9 #> 12 ID16 19.9 21.8 #> 13 ID17 20.4 24.5 #> 14 ID18 21.2 29.2 #> 15 ID19 22 27 #> 16 ID20 22.2 24 #> 17 ID21 23.4 25.8 #> 18 ID22 25.2 22 #> 19 ID23 25.5 19.7 #> 20 ID24 25.6 33.4 #> 21 ID25 26.3 30 #> 22 ID26 26.4 28.9 #> 23 ID27 27.5 34.3 #> 24 ID28 28.2 34.3 #> 25 ID29 30.3 35.8 #> 26 ID30 31.4 37.8 #> 27 ID31 32.9 37.1 #> 28 ID32 33.9 40.3 #> 29 ID33 34.3 37.1 #> 30 ID34 35.3 40 #> 31 ID35 38.4 42.2 #> 32 ID36 39.2 49.3 #> 33 ID37 48.2 41 #> 34 ID38 49 55 #> 35 ID39 51.3 55 #> 36 ID40 52.2 64.6 #> 37 ID41 60.2 54.8 #> 38 ID42 61.5 64.6 #> 39 ID43 78 78.6 #> 40 ID44 80.6 91.4 #> 41 ID45 84.4 65.7 #> 42 ID46 85.3 97.2 #> 43 ID47 89 100 #> 44 ID48 92.6 103.2 #> 45 ID49 94.9 89.6 #> 46 ID50 108.6 123.4 #> 47 ID51 110.4 115 #> 48 ID52 115.6 124.4 #> 49 ID53 116.9 138.1 #> 50 ID54 122.7 139.2 #> 51 ID55 143.6 166.8 #> 52 ID56 146.1 143.7 #> 53 ID57 146.2 150.8 #> 54 ID58 154.5 178.5 #> 55 ID59 161.7 183.4 #> 56 ID60 167.7 176.1 #> 57 ID61 176.6 173.7 #> 58 ID62 179.7 180.4 #> 59 ID63 188.9 198.9 #> 60 ID64 189 199.4 #> 61 ID65 197.9 211.1 #> 62 ID66 201.7 220.1 #> 63 ID67 207.7 218.3 #> 64 ID68 209.2 223.4 #> 65 ID69 210.5 196.8 #> 66 ID70 210.9 223.8 #> 67 ID71 214.1 232.2 #> 68 ID72 218.6 237.1 #> 69 ID73 232.9 247.9 #> 70 ID74 235 227 #> 71 ID75 237.8 235.3 #> 72 ID76 246.1 283 #> 73 ID77 252.6 263.5 #> 74 ID78 254.9 283.5 #> 75 ID79 261.4 272.3 #> 76 ID80 262.4 256.6 #> 77 ID81 270.1 289.2 #> 78 ID82 271.3 265.7 #> 79 ID83 273.5 264.5 #> 80 ID84 274.2 262.2 #> 81 ID85 281.1 271.1 #> 82 ID86 297 311.7 #> 83 ID87 298.7 296.5 #> 84 ID88 326.7 310.2 #> 85 ID89 327.1 362.1 #> 86 ID90 329.6 368.5 #> 87 ID91 332.8 370.6 #> 88 ID92 337.4 379.5 #> 89 ID93 340.1 358.3 #> 90 ID94 364.8 390.6 #> 91 ID95 370.1 408.4 #> 92 ID96 390.6 371 #> 93 ID97 395.7 431.7 #> 94 ID98 419.3 438.7 #> 95 ID99 421.3 382.3 #> 96 ID100 426.3 441.8 #> 97 ID101 440.4 455.6 #> 98 ID102 443.4 465.8 #> 99 ID103 446.2 416.4 #> 100 ID104 462.7 480.3 #> 101 ID105 467.7 470.7 #> 102 ID106 507.4 496.7 #> 103 ID107 568.3 595.9 #> 104 ID108 599.6 611 #> 105 ID109 613.8 622.3 #> 106 ID110 633.5 641.3 #> 107 ID111 678.6 717.5 #> 108 ID112 687.6 714.9 #> 109 ID113 695.1 647.3 #> 110 ID114 701 725.6 #> 111 ID115 708.3 729.5 #> 112 ID116 735.6 754.5 #> 113 ID117 794.8 768.5 #> 114 ID118 937 901.6 #> 115 ID119 1031.9 1068 #> 116 ID120 1239.3 1279 #> # Using `blandAltman` function when the `tyep2` is 2 with `X vs. 
(Y-X)/X` difference ba3 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, type2 = 4) getOutlier(ba3, method = \"ESD\", difference = \"rel\") #> $stat #> i Mean SD x Obs ESDi Lambda Outlier #> 1 1 0.07824269 0.1730707 1.0000000 1 5.325900 3.445148 TRUE #> 2 2 0.07049683 0.1514810 0.8137255 4 4.906415 3.442394 TRUE #> 3 3 0.06419828 0.1355778 0.7250000 2 4.873967 3.439611 TRUE #> 4 4 0.05855040 0.1214221 -0.3414634 10 3.294407 3.436800 FALSE #> 5 5 0.06199880 0.1160523 -0.2708333 14 2.867950 3.433961 FALSE #> 6 6 0.06489299 0.1122769 0.3773585 18 2.782991 3.431092 FALSE #> #> $ord #> [1] 1 4 2 #> #> $sid #> [1] 1 4 2 #> #> $outmat #> sid x y #> 1 1 1.5 3.0 #> 2 2 4.0 6.9 #> 3 4 10.2 18.5 #> #> $rmmat #> sid x y #> 1 3 9.2 8.0 #> 2 5 11.2 9.0 #> 3 6 12.4 13.0 #> 4 7 14.8 19.7 #> 5 8 14.8 16.0 #> 6 9 15.9 21.9 #> 7 10 16.4 10.8 #> 8 11 17.6 22.6 #> 9 12 18.1 15.9 #> 10 13 18.1 20.0 #> 11 14 19.2 14.0 #> 12 15 19.6 25.9 #> 13 16 19.9 21.8 #> 14 17 20.4 24.5 #> 15 18 21.2 29.2 #> 16 19 22.0 27.0 #> 17 20 22.2 24.0 #> 18 21 23.4 25.8 #> 19 22 25.2 22.0 #> 20 23 25.5 19.7 #> 21 24 25.6 33.4 #> 22 25 26.3 30.0 #> 23 26 26.4 28.9 #> 24 27 27.5 34.3 #> 25 28 28.2 34.3 #> 26 29 30.3 35.8 #> 27 30 31.4 37.8 #> 28 31 32.9 37.1 #> 29 32 33.9 40.3 #> 30 33 34.3 37.1 #> 31 34 35.3 40.0 #> 32 35 38.4 42.2 #> 33 36 39.2 49.3 #> 34 37 48.2 41.0 #> 35 38 49.0 55.0 #> 36 39 51.3 55.0 #> 37 40 52.2 64.6 #> 38 41 60.2 54.8 #> 39 42 61.5 64.6 #> 40 43 78.0 78.6 #> 41 44 80.6 91.4 #> 42 45 84.4 65.7 #> 43 46 85.3 97.2 #> 44 47 89.0 100.0 #> 45 48 92.6 103.2 #> 46 49 94.9 89.6 #> 47 50 108.6 123.4 #> 48 51 110.4 115.0 #> 49 52 115.6 124.4 #> 50 53 116.9 138.1 #> 51 54 122.7 139.2 #> 52 55 143.6 166.8 #> 53 56 146.1 143.7 #> 54 57 146.2 150.8 #> 55 58 154.5 178.5 #> 56 59 161.7 183.4 #> 57 60 167.7 176.1 #> 58 61 176.6 173.7 #> 59 62 179.7 180.4 #> 60 63 188.9 198.9 #> 61 64 189.0 199.4 #> 62 65 197.9 211.1 #> 63 66 201.7 220.1 #> 64 67 207.7 218.3 #> 65 68 209.2 223.4 #> 66 69 210.5 196.8 #> 67 70 210.9 223.8 #> 68 71 214.1 232.2 #> 69 72 218.6 237.1 #> 70 73 232.9 247.9 #> 71 74 235.0 227.0 #> 72 75 237.8 235.3 #> 73 76 246.1 283.0 #> 74 77 252.6 263.5 #> 75 78 254.9 283.5 #> 76 79 261.4 272.3 #> 77 80 262.4 256.6 #> 78 81 270.1 289.2 #> 79 82 271.3 265.7 #> 80 83 273.5 264.5 #> 81 84 274.2 262.2 #> 82 85 281.1 271.1 #> 83 86 297.0 311.7 #> 84 87 298.7 296.5 #> 85 88 326.7 310.2 #> 86 89 327.1 362.1 #> 87 90 329.6 368.5 #> 88 91 332.8 370.6 #> 89 92 337.4 379.5 #> 90 93 340.1 358.3 #> 91 94 364.8 390.6 #> 92 95 370.1 408.4 #> 93 96 390.6 371.0 #> 94 97 395.7 431.7 #> 95 98 419.3 438.7 #> 96 99 421.3 382.3 #> 97 100 426.3 441.8 #> 98 101 440.4 455.6 #> 99 102 443.4 465.8 #> 100 103 446.2 416.4 #> 101 104 462.7 480.3 #> 102 105 467.7 470.7 #> 103 106 507.4 496.7 #> 104 107 568.3 595.9 #> 105 108 599.6 611.0 #> 106 109 613.8 622.3 #> 107 110 633.5 641.3 #> 108 111 678.6 717.5 #> 109 112 687.6 714.9 #> 110 113 695.1 647.3 #> 111 114 701.0 725.6 #> 112 115 708.3 729.5 #> 113 116 735.6 754.5 #> 114 117 794.8 768.5 #> 115 118 937.0 901.6 #> 116 119 1031.9 1068.0 #> 117 120 1239.3 1279.0 #> # Using \"4E\" as the method input ba4 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba4, method = \"4E\") #> No outlier is detected."},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":null,"dir":"Reference","previous_headings":"","what":"Inermediate Precision Data — glucose","title":"Inermediate Precision Data — glucose","text":"data set consists Glucose intermediate 
precision data CLSI EP05-A3 guideline.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Intermediate Precision Data — glucose","text":"","code":"glucose"},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Intermediate Precision Data — glucose","text":"glucose data set contains 80 observations 3 variables. day day number run run number value measurement value","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Intermediate Precision Data — glucose","text":"CLSI-EP05A3 Table A1. Glucose Precision Evaluation Measurements (mg/dL) cited data set.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/glucose.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Intermediate Precision Data — glucose","text":"EP05A3: Evaluation Precision Quantitative Measurement Procedures.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute Difference for Bland-Altman — h_difference","title":"Compute Difference for Bland-Altman — h_difference","text":"Helper function computes difference specific type.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute Difference for Bland-Altman — h_difference","text":"","code":"h_difference(x, y, type)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute Difference for Bland-Altman — h_difference","text":"x (numeric) reference method. y (numeric) test method. type (integer) integer specifying specific difference Bland-Altman (default 3). Possible choices : 1 - difference X vs. Y-X (absolute differences). 2 - difference X vs. (Y-X)/X (relative differences). 3 - difference 0.5*(X+Y) vs. Y-X (absolute differences). 4 - difference 0.5*(X+Y) vs. (Y-X)/X (relative differences). 5 - difference 0.5*(X+Y) vs. 
(Y-X)/(0.5*(X+Y)) (relative differences).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute Difference for Bland-Altman — h_difference","text":"matrix contains x y measurement data corresponding difference.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_difference.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute Difference for Bland-Altman — h_difference","text":"","code":"h_difference(x = c(1.1, 1.2, 1.5), y = c(1.2, 1.3, 1.4), type = 5) #> x y x_ba y_ba #> [1,] 1.1 1.2 1.15 0.08695652 #> [2,] 1.2 1.3 1.25 0.08000000 #> [3,] 1.5 1.4 1.45 -0.06896552"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":null,"dir":"Reference","previous_headings":"","what":"Factor Variable Per Levels — h_factor","title":"Factor Variable Per Levels — h_factor","text":"Helper function factor inputs order appearance, per levels provide.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Factor Variable Per Levels — h_factor","text":"","code":"h_factor(df, var, levels = NULL, ...)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Factor Variable Per Levels — h_factor","text":"df (data.frame) input data. var (string) variable factor. levels (vector) character vector known levels. ... arguments passed factor().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Factor Variable Per Levels — h_factor","text":"factor variable","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_factor.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Factor Variable Per Levels — h_factor","text":"","code":"df <- data.frame(a = c(\"aa\", \"a\", \"aa\")) h_factor(df, var = \"a\") #> [1] aa a aa #> Levels: a aa h_factor(df, var = \"a\", levels = c(\"aa\", \"a\")) #> [1] aa a aa #> Levels: aa a"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":null,"dir":"Reference","previous_headings":"","what":"Format and Concatenate to String — h_fmt_est","title":"Format and Concatenate to String — h_fmt_est","text":"Help function format numeric data strings concatenate single character.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format and Concatenate to String — h_fmt_est","text":"","code":"h_fmt_est(num1, num2, digits = c(2, 2), width = c(6, 6))"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format and Concatenate to String — h_fmt_est","text":"num1 (numeric) first numeric input. num2 (numeric) second numeric input. digits (integer) desired number digits decimal point. 
width (integer) total field width.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format and Concatenate to String — h_fmt_est","text":"single character.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_est.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Format and Concatenate to String — h_fmt_est","text":"","code":"h_fmt_est(num1 = 3.14, num2 = 3.1415, width = c(4, 4)) #> [1] \"3.14 (3.14)\""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":null,"dir":"Reference","previous_headings":"","what":"Format Numeric Data — h_fmt_num","title":"Format Numeric Data — h_fmt_num","text":"Help function format numeric data formatC function.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format Numeric Data — h_fmt_num","text":"","code":"h_fmt_num(x, digits, width = digits + 4)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format Numeric Data — h_fmt_num","text":"x (numeric) numeric input. digits (integer) desired number digits decimal point (format = \"f\"). width (integer) total field width.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format Numeric Data — h_fmt_num","text":"character object specific digits width.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_num.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Format Numeric Data — h_fmt_num","text":"","code":"h_fmt_num(pi * 10^(-2:2), digits = 2, width = 6) #> [1] \" 0.03\" \" 0.31\" \" 3.14\" \" 31.42\" \"314.16\""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":null,"dir":"Reference","previous_headings":"","what":"Format and Concatenate to Range — h_fmt_range","title":"Format and Concatenate to Range — h_fmt_range","text":"Help function format numeric data strings concatenate single character range.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format and Concatenate to Range — h_fmt_range","text":"","code":"h_fmt_range(num1, num2, digits = c(2, 2), width = c(6, 6))"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format and Concatenate to Range — h_fmt_range","text":"num1 (numeric) first numeric input. num2 (numeric) second numeric input. digits (integer) desired number digits decimal point. 
width (integer) total field width.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format and Concatenate to Range — h_fmt_range","text":"single character.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/h_fmt_range.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Format and Concatenate to Range — h_fmt_range","text":"","code":"h_fmt_range(num1 = 3.14, num2 = 3.14, width = c(4, 4)) #> [1] \"(3.14, 3.14)\""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":null,"dir":"Reference","previous_headings":"","what":"Summarize Basic Statistics — h_summarize","title":"Summarize Basic Statistics — h_summarize","text":"Help function summarizes statistics needed.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summarize Basic Statistics — h_summarize","text":"","code":"h_summarize(x, conf.level = 0.95)"},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summarize Basic Statistics — h_summarize","text":"x (numeric) input numeric vector. conf.level (numeric) significance level, default 0.95.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summarize Basic Statistics — h_summarize","text":"vector contains several statistics, n, mean, median, min, max, q25, q75, sd, se, limit agreement limit confidence interval .","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/h_summarize.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summarize Basic Statistics — h_summarize","text":"","code":"h_summarize(1:50) #> n mean median min max q1 q3 sd se limit_lr limit_ur #> [1,] 50 25.5 25.5 1 50 13.25 37.75 14.57738 2.061553 -3.071139 54.07114 #> ci_lr ci_ur #> [1,] 21.45943 29.54057"},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":null,"dir":"Reference","previous_headings":"","what":"Two-sampled Paired Test Data — ldlroc","title":"Two-sampled Paired Test Data — ldlroc","text":"data set consists measurements low-density lipoprotein (LDL), oxidized low-density lipoprotein (OxLDL) corresponding diagnosis. OxLDL thought active molecule process atherosclerosis, proponents believe serum concentration provide accurate risk stratification traditional LDL assay.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Two-sampled Paired Test Data — ldlroc","text":"","code":"ldlroc"},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Two-sampled Paired Test Data — ldlroc","text":"ldlroc data set contains 50 observations 3 variables. 
Diagnosis diagnosis, 1 represents subject disease condition interest present, 0 absent OxLDL oxidized low-density lipoprotein(OxLDL) measurement value LDL low-density lipoprotein(LDL) measurement value","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Two-sampled Paired Test Data — ldlroc","text":"CLSI-EP24A2 Table D1. OxLDL LDL Assay Values (U/L) 50 Subjects.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/ldlroc.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Two-sampled Paired Test Data — ldlroc","text":"EP24A2 Assessment Diagnostic Accuracy Laboratory Tests Using Receiver Operating Characteristic Curves.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/mcradds-package.html","id":null,"dir":"Reference","previous_headings":"","what":"mcradds Package — mcradds-package","title":"mcradds Package — mcradds-package","text":"mcradds Processing analyzing Vitro Diagnostic Data.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/mcradds-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"mcradds Package — mcradds-package","text":"Maintainer: Kai Gu gukai1212@163.com [copyright holder]","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":null,"dir":"Reference","previous_headings":"","what":"Nonparametric Method in Calculation of Reference Interval — nonparRI","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"nonparametric method used calculate reference interval distribution skewed sample size 120 observations.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"","code":"nonparRI(x, ind = 1:length(x), conf.level = 0.95)"},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"x (numeric) numeric measurements target population. ind (integer) integer vector boot process, default elements x. conf.level (numeric) percentile reference limit.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"vector nonparametric reference interval","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRI.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Nonparametric Method in Calculation of Reference Interval — nonparRI","text":"","code":"data(\"calcium\") x <- calcium$Value nonparRI(x) #> 2.5% 97.5% #> 9.1 10.3"},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":null,"dir":"Reference","previous_headings":"","what":"Nonparametric Rank Number of Reference Interval — nonparRanks","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"data shows rank number computing confidence interval nonparametric reference limit samples within 119-1000 values. 
reference interval must 95% confidence interval 90%.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"","code":"nonparRanks"},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"nonparRanks data set contains 882 observations 3 variables. SampleSize sample size Lower lower rank Upper upper rank","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"CLSI-EP28A3 Table 8. cited data set.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/nonparRanks.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Nonparametric Rank Number of Reference Interval — nonparRanks","text":"EP28-A3c: Defining, Establishing, Verifying Reference Intervals Clinical Laboratory.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":null,"dir":"Reference","previous_headings":"","what":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"Adjust cor.test function can define specific H0 per request, based Fisher's Z transformation correlation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"","code":"pearsonTest( x, y, h0 = 0, conf.level = 0.95, alternative = c(\"two.sided\", \"less\", \"greater\"), ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"x (numeric) one measurement. y (numeric) another measurement. h0 (numeric) specified hypothesized value difference two correlations, default 0. conf.level (numeric) significance level returned confidence interval hypothesis. alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". ... 
arguments passed cor.test().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"named vector contains correlation coefficient (cor), confidence interval(lowerci upperci), Z statistic (Z) p-value (pval)","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"NCSS correlation document","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/pearsonTest.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Hypothesis Test for Pearson Correlation Coefficient — pearsonTest","text":"","code":"x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) pearsonTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.5711816 -0.1497426 0.8955795 0.2448722 0.4032777 #> #> $method #> [1] \"Pearson's correlation\" #> #> $conf.level #> [1] 0.95 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":null,"dir":"Reference","previous_headings":"","what":"Pipe operator — %>%","title":"Pipe operator — %>%","text":"See magrittr::%>% details.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pipe operator — %>%","text":"","code":"lhs %>% rhs"},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pipe operator — %>%","text":"lhs value magrittr placeholder. rhs function call using magrittr semantics.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/pipe.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Pipe operator — %>%","text":"result calling rhs(lhs).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":null,"dir":"Reference","previous_headings":"","what":"Quantitative Measurement Data — platelet","title":"Quantitative Measurement Data — platelet","text":"example platelet can used create data set comparing Platelet results two analyzers cells.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Quantitative Measurement Data — platelet","text":"","code":"platelet"},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Quantitative Measurement Data — platelet","text":"platelet data set contains 120 observations 3 variables. 
Sample Sample id Comparative Measurements comparative analyzer Candidate Measurements candidate analyzer","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/platelet.html","id":"source","dir":"Reference","previous_headings":"","what":"Source","title":"Quantitative Measurement Data — platelet","text":"CLSI-EP09 A3 Appendix H, Table H2 cited data set.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/qualData.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulated Qualitative Data — qualData","title":"Simulated Qualitative Data — qualData","text":"simulated data qualData can used calculate qualitative performance sensitivity specificity.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/qualData.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulated Qualitative Data — qualData","text":"","code":"qualData"},{"path":"https://kaigu1990.github.io/mcradds/reference/qualData.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Simulated Qualitative Data — qualData","text":"qualData data set contains 200 observations 3 variables. Sample Sample id ComparativeN Measurements comparative analyzer 1=positive 0=negative CandidateN Measurements candidate analyzer 1=positive 0=negative","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"function used establish reference interval target population parametric, non-parametric robust methods follows CLSI-EP28A3 NMPA guideline. additional, also provides corresponding confidence interval lower/upper reference limit needed. Given outliers identified beforehand, Tukey Dixon methods can applied depending distribution data.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"","code":"refInterval( x, out_method = c(\"doxin\", \"tukey\"), out_rm = FALSE, RI_method = c(\"parametric\", \"nonparametric\", \"robust\"), CI_method = c(\"parametric\", \"nonparametric\", \"boot\"), refLevel = 0.95, bootCI = c(\"perc\", \"norm\", \"basic\", \"stud\", \"bca\"), confLevel = 0.9, rng.seed = NULL, tol = 1e-06, R = 10000 )"},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"x (numeric) numeric measurements target population. out_method (string) string specifying outlier detection use. out_rm (logical) whether outliers removed . RI_method (string) string specifying method computing reference interval use. Default parametric, options can nonparametric robust. CI_method (string) string specifying method computing confidence interval reference limit(lower upper) use. Default parametric, options can nonparametric boot. refLevel (numeric) reference range/interval, usual 0.95. bootCI (string) string specifying bootstrap confidence interval boot.ci() function boot package. 
Default perc(bootstrap percentile), options can norm(normal approximation), boot(basic bootstrap), stud(studentized bootstrap) bca(adjusted bootstrap percentile). confLevel (numeric) significance level confidence interval reference limit. rng.seed (integer) number random number generator seed bootstrap sampling. set NULL currently R session used RNG setting used. tol (numeric) tolerance iterative process can stopped robust method. R (integer) number bootstrap replicates, used boot() function.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"RefInt object contains relevant results establishing reference interval.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"conditions use aware : parametric method used calculate reference interval, confidence interval method well. non-parametric method used calculate reference interval sample size 120 observations, non-parametric suggested confidence interval. Otherwise sample size 120, bootstrap method better choice. Beside non-parametric method confidence interval allows refLevel=0.95 confLevel=0.9 arguments, bootstrap methods used automatically. robust method used calculate reference interval, method confidence interval must bootstrap.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/refInterval.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate Reference Interval and Corresponding Confidence Interval — refInterval","text":"","code":"data(\"calcium\") x <- calcium$Value refInterval(x, RI_method = \"parametric\", CI_method = \"parametric\") #> #> Reference Interval Method: parametric, Confidence Interval Method: parametric #> #> Call: refInterval(x = x, RI_method = \"parametric\", CI_method = \"parametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.05, 10.32 #> RefLower Confidence Interval: 8.9926, 9.1100 #> Refupper Confidence Interval: 10.2584, 10.3757 refInterval(x, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> Reference Interval Method: nonparametric, Confidence Interval Method: nonparametric #> #> Call: refInterval(x = x, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.10, 10.30 #> RefLower Confidence Interval: 8.9000, 9.2000 #> Refupper Confidence Interval: 10.3000, 10.4000 refInterval(x, RI_method = \"robust\", CI_method = \"boot\", R = 1000) #> [1] \"Bootstrape process could take a short while.\" #> #> Reference Interval Method: robust, Confidence Interval Method: boot #> #> Call: refInterval(x = x, RI_method = \"robust\", CI_method = \"boot\", #> R = 1000) #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.04, 10.32 #> RefLower Confidence Interval: 8.9777, 9.0979 #> Refupper Confidence Interval: 10.2568, 10.3751"},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":null,"dir":"Reference","previous_headings":"","what":"Robust Method in Calculation of Reference Interval — robustRI","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"robust method used calculate reference interval small sample size (120 
observations).","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"","code":"robustRI(x, ind = 1:length(x), conf.level = 0.95, tol = 1e-06)"},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"x (numeric) numeric measurements target population. ind (integer) integer vector boot process, default elements x. conf.level (numeric) significance level internal t statistic. tol (numeric) tolerance iterative process can stopped.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"vector robust reference interval","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"robust algorithm referring CLSI document EP28A3.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/robustRI.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Robust Method in Calculation of Reference Interval — robustRI","text":"","code":"# This example data is taken from EP28A3 Appendix B. to ensure the result is in accordance. x <- c(8.9, 9.2, rep(9.4, 2), rep(9.5, 3), rep(9.6, 4), rep(9.7, 5), 9.8, rep(9.9, 2), 10.2) robustRI(x) #> [1] 9.049545 10.199396"},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":null,"dir":"Reference","previous_headings":"","what":"SampleSize Class — SampleSize-class","title":"SampleSize Class — SampleSize-class","text":"SampleSize class serves store results parameters sample size calculation.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"SampleSize Class — SampleSize-class","text":"","code":"SampleSize(call, method, n, param)"},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"SampleSize Class — SampleSize-class","text":"call (call) function call. method (character) method name. n (numeric) number sample size. 
param (list) list relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"SampleSize Class — SampleSize-class","text":"object class SampleSize.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/samplesize-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"SampleSize Class — SampleSize-class","text":"call call method method n n param param","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":null,"dir":"Reference","previous_headings":"","what":"Show Method for Objects — show,SampleSize-method","title":"Show Method for Objects — show,SampleSize-method","text":"show method displays essential information objects.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Show Method for Objects — show,SampleSize-method","text":"","code":"# S4 method for SampleSize show(object) # S4 method for MCTab show(object) # S4 method for BAsummary show(object) # S4 method for RefInt show(object) # S4 method for tpROC show(object)"},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Show Method for Objects — show,SampleSize-method","text":"object () input.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Show Method for Objects — show,SampleSize-method","text":"None (invisible NULL), used side effect printing console.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/show.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Show Method for Objects — show,SampleSize-method","text":"","code":"# Sample zie calculation size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> Sample size determination for one Proportion #> #> Call: size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> optimal sample size: n = 239 #> #> p1:0.95 p0:0.9 alpha:0.05 power:0.8 alternative:two.sided size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> Sample size determination for a Given Lower Confidence Interval of Pearson's Correlation #> #> Call: size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> optimal sample size: n = 86 #> #> r:0.9 lr:0.85 alpha:0.025 interval:c(10, 1e+05) tol:1e-05 alternative:greater # Get 2x2 Contingency Table qualData %>% diagTab(formula = ~ CandidateN + ComparativeN) #> Contingency Table: #> #> levels: 0 1 #> ComparativeN #> CandidateN 0 1 #> 0 54 16 #> 1 8 122 # Bland-Altman analysis data(\"platelet\") blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> Call: blandAltman(x = platelet$Comparative, y = platelet$Candidate) #> #> Absolute difference type: Y-X #> Relative difference type: (Y-X)/(0.5*(X+Y)) #> #> Absolute.difference Relative.difference #> N 120 120 #> Mean (SD) 7.330 (15.990) 0.064 ( 0.145) #> Median 6.350 0.055 #> Q1, Q3 ( 0.150, 15.750) ( 0.001, 0.118) #> Min, Max (-47.800, 42.100) (-0.412, 0.667) #> Limit of Agreement (-24.011, 38.671) (-0.220, 0.347) #> Confidence Interval of Mean ( 4.469, 10.191) ( 0.038, 0.089) # Reference Interval data(\"calcium\") refInterval(x = calcium$Value, RI_method = \"nonparametric\", 
CI_method = \"nonparametric\") #> #> Reference Interval Method: nonparametric, Confidence Interval Method: nonparametric #> #> Call: refInterval(x = calcium$Value, RI_method = \"nonparametric\", CI_method = \"nonparametric\") #> #> N = 240 #> Outliers: NULL #> Reference Interval: 9.10, 10.30 #> RefLower Confidence Interval: 8.9000, 9.2000 #> Refupper Confidence Interval: 10.3000, 10.4000 # Comparing the Paired ROC when Non-inferiority margin <= -0.1 data(\"ldlroc\") aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = \"non-inferiority\", h0 = -0.1 ) #> Setting levels: control = 0, case = 1 #> Setting direction: controls < cases #> #> The hypothesis for testing non-inferiority based on Paired ROC curve #> #> Test assay: #> Area under the curve: 0.7995 #> Standard Error(SE): 0.0620 #> 95% Confidence Interval(CI): 0.6781-0.9210 (DeLong) #> #> Reference/standard assay: #> Area under the curve: 0.5617 #> Standard Error(SE): 0.0836 #> 95% Confidence Interval(CI): 0.3979-0.7255 (DeLong) #> #> Comparison of Paired AUC: #> Alternative hypothesis: the difference in AUC is non-inferiority to -0.1 #> Difference of AUC: 0.2378 #> Standard Error(SE): 0.0790 #> 95% Confidence Interval(CI): 0.0829-0.3927 (standardized differenec method) #> Z: 4.2739 #> Pvalue: 9.606e-06"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"function performs sample size computation testing Pearson's correlation lower confidence interval provided.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"","code":"size_ci_corr( r, lr, alpha = 0.05, interval = c(10, 1e+05), tol = 1e-05, alternative = c(\"two.sided\", \"less\", \"greater\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"r (numeric) expected correlation coefficient evaluated assay. lr (numeric) acceptable correlation coefficient evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). interval (numeric) numeric vector containing end-points interval searched root(sample size). defaults set c(1, 100000). tol (numeric) tolerance searching root(sample size). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"Fisher (1973, p. 
199).","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_corr.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing Confidence Interval of Pearson's correlation — size_ci_corr","text":"","code":"size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> Sample size determination for a Given Lower Confidence Interval of Pearson's Correlation #> #> Call: size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = \"greater\") #> #> optimal sample size: n = 86 #> #> r:0.9 lr:0.85 alpha:0.025 interval:c(10, 1e+05) tol:1e-05 alternative:greater"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"function performs sample size computation testing given lower confidence interval one proportion using Simple Asymptotic(Wald), Wilson score, clopper-pearson methods.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"","code":"size_ci_one_prop( p, lr, alpha = 0.05, interval = c(1, 1e+05), tol = 1e-05, alternative = c(\"two.sided\", \"less\", \"greater\"), method = c(\"simple-asymptotic\", \"wilson\", \"wald\", \"clopper-pearson\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"p (numeric) expected criteria evaluated assay. lr (numeric) acceptable criteria evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). interval (numeric) numeric vector containing end-points interval searched root(sample size). defaults set c(1, 100000). tol (numeric) tolerance searching root(sample size). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". method (string) string specifying method use. Simple Asymptotic default, equal Wald. Options can \"wilson\", \"clopper-pearson\" method, see DescTools::BinomCIn","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"Newcombe, R. G. 1998. 'Two-Sided Confidence Intervals Single Proportion: Comparison Seven Methods.' Statistics Medicine, 17, pp. 
857-872.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_ci_one_prop.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing Confidence Interval of One Proportion — size_ci_one_prop","text":"","code":"size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wilson\") #> #> optimal sample size: n = 246 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:wilson size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"simple-asymptotic\") #> #> optimal sample size: n = 196 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:simple-asymptotic size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wald\") #> #> Sample size determination for a Given Lower Confidence Interval #> #> Call: size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = \"wald\") #> #> optimal sample size: n = 196 #> #> p:0.85 lr:0.8 alpha:0.05 interval:c(1, 1e+05) tol:1e-05 alternative:two.sided method:wald"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing Pearson's correlation — size_corr","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"function performs sample size computation testing Pearson's correlation, using Fisher's classic z-transformation normalize distribution Pearson's correlation coefficient.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"","code":"size_corr( r1, r0, alpha = 0.05, power = 0.8, alternative = c(\"two.sided\", \"less\", \"greater\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"r1 (numeric) expected correlation coefficient evaluated assay. r0 (numeric) acceptable correlation coefficient evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). power (numeric) Power test, equal 1 minus type-II-risk (\\(\\beta\\)). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"Fisher (1973, p.
199).","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_corr.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing Pearson's correlation — size_corr","text":"","code":"size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> Sample size determination for testing Pearson's Correlation #> #> Call: size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = \"greater\") #> #> optimal sample size: n = 64 #> #> r1:0.95 r0:0.9 alpha:0.025 power:0.8 alternative:greater"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":null,"dir":"Reference","previous_headings":"","what":"Sample Size for Testing One Proportion — size_one_prop","title":"Sample Size for Testing One Proportion — size_one_prop","text":"function performs sample size computation testing one proportion accordance Chinese NMPA's IVD guideline.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sample Size for Testing One Proportion — size_one_prop","text":"","code":"size_one_prop( p1, p0, alpha = 0.05, power = 0.8, alternative = c(\"two.sided\", \"less\", \"greater\") )"},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sample Size for Testing One Proportion — size_one_prop","text":"p1 (numeric) expected criteria evaluated assay. p0 (numeric) acceptable criteria evaluated assay. alpha (numeric) type--risk, \\(\\alpha\\). power (numeric) Power test, equal 1 minus type-II-risk (\\(\\beta\\)). alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\".","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sample Size for Testing One Proportion — size_one_prop","text":"object size class contains sample size relevant parameters.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sample Size for Testing One Proportion — size_one_prop","text":"Chinese NMPA's IVD technical guideline.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/size_one_prop.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sample Size for Testing One Proportion — size_one_prop","text":"","code":"size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> Sample size determination for one Proportion #> #> Call: size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) #> #> optimal sample size: n = 239 #> #> p1:0.95 p0:0.9 alpha:0.05 power:0.8 alternative:two.sided"},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":null,"dir":"Reference","previous_headings":"","what":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"Providing confidence interval Spearman's rank correlation Bootstrap, define specific H0 per request, based Fisher's Z transformation correlation variance recommended Bonett Wright (2000), 
Pearson's.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"","code":"spearmanTest( x, y, h0 = 0, conf.level = 0.95, alternative = c(\"two.sided\", \"less\", \"greater\"), nrep = 1000, rng.seed = NULL, ... )"},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"x (numeric) one measurement. y (numeric) another measurement. h0 (numeric) specified hypothesized value difference two correlations, default 0. conf.level (numeric) significance level returned confidence interval hypothesis. alternative (string) string specifying alternative hypothesis, must one \"two.sided\" (default), \"greater\" \"less\". nrep (integer) number replicates bootstrapping, default 1000. rng.seed (integer) number random number generator seed bootstrap sampling. set NULL currently R session used RNG setting used. ... arguments passed cor.test().","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"named vector contains correlation coefficient (cor), confidence interval(lowerci upperci), Z statistic (Z) p-value (pval)","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"NCSS correlation document","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/reference/spearmanTest.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Hypothesis Test for Spearman Correlation Coefficient — spearmanTest","text":"","code":"x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) spearmanTest(x, y, h0 = 0.5, alternative = \"greater\") #> $stat #> cor lowerci upperci Z pval #> 0.6000000 -0.1581140 0.9765538 0.3243526 0.3728355 #> #> $method #> [1] \"Spearman's correlation\" #> #> $conf.level #> [1] 0.95 #>"},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":null,"dir":"Reference","previous_headings":"","what":"Test for Paired ROC Class — tpROC-class","title":"Test for Paired ROC Class — tpROC-class","text":"tpROC class serves store results testing AUC paired two-sample assays.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test for Paired ROC Class — tpROC-class","text":"","code":"tpROC(testROC, refROC, method, H0, stat)"},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test for Paired ROC Class — tpROC-class","text":"testROC (list) object pROC::roc() function test assay. refROC (list) object pROC::roc() function reference/standard assay. method (character) method hypothesis test. H0 (numeric) margin test.
stat (list) list contains difference comparing results, difference AUC, standard error, confidence interval, Z statistic P value.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test for Paired ROC Class — tpROC-class","text":"object class tpROC.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tpROC-class.html","id":"slots","dir":"Reference","previous_headings":"","what":"Slots","title":"Test for Paired ROC Class — tpROC-class","text":"testROC testROC refROC refROC method method stat stat","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":null,"dir":"Reference","previous_headings":"","what":"Detect Tukey Outlier — tukey_outlier","title":"Detect Tukey Outlier — tukey_outlier","text":"Help function detects potential outlier Tukey method number Q1-1.5*IQR Q3+1.5*IQR.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Detect Tukey Outlier — tukey_outlier","text":"","code":"tukey_outlier(x)"},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Detect Tukey Outlier — tukey_outlier","text":"x (numeric) numeric input","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Detect Tukey Outlier — tukey_outlier","text":"list contains outliers vector without outliers.","code":""},{"path":"https://kaigu1990.github.io/mcradds/reference/tukey_outlier.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Detect Tukey Outlier — tukey_outlier","text":"","code":"x <- c(13.6, 44.4, 45.9, 14.9, 41.9, 53.3, 44.7, 95.2, 44.1, 50.7, 45.2, 60.1, 89.1) tukey_outlier(x) #> $ord #> [1] 1 4 8 13 #> #> $out #> [1] 13.6 14.9 95.2 89.1 #> #> $subset #> [1] 44.4 45.9 41.9 53.3 44.7 44.1 50.7 45.2 60.1 #>"},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"mcradds-101","dir":"Changelog","previous_headings":"","what":"mcradds 1.0.1","title":"mcradds 1.0.1","text":"CRAN release: 2023-10-11","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"meta-1-0-1","dir":"Changelog","previous_headings":"","what":"Meta","title":"mcradds 1.0.1","text":"Remove mcr package related codes ’s available CRAN.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"meta-1-0-0","dir":"Changelog","previous_headings":"","what":"Meta","title":"mcradds 1.0.0","text":"First public release mcradds package. Submission CRAN.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"new-features-1-0-0","dir":"Changelog","previous_headings":"","what":"New features","title":"mcradds 1.0.0","text":"Added autoplot method Bland-Altman regression plots.","code":""},{"path":[]},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"new-features-0-2-0","dir":"Changelog","previous_headings":"","what":"New features","title":"mcradds 0.2.0","text":"Added tukey_outlier dixon_outlier detect outliers ahead establishing reference range. Added robustRI nonparRI compute robust non-parametric reference range, integrated main program refInterval. 
Wrapped anovaVCA VCAinference VCA package analyze variance components ANOVA model. Added aucTest AUC test paired two-sample measurements designs difference, non-inferiority superiority. Added RefInt tpROC classes corresponding show method. Added calcium, glucose, ldlroc PDL1RP data sets example testing use, nonparRanks data set internal function use.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"enhancements-0-2-0","dir":"Changelog","previous_headings":"","what":"Enhancements","title":"mcradds 0.2.0","text":"Enhanced diagTab getAccuracy can support reader precision analysis qualitative performance.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"miscellaneous-0-2-0","dir":"Changelog","previous_headings":"","what":"Miscellaneous","title":"mcradds 0.2.0","text":"Added series helper function format concatenate string. Uniform capital lower-case letters roxygen documents.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"mcradds-010","dir":"Changelog","previous_headings":"","what":"mcradds 0.1.0","title":"mcradds 0.1.0","text":"First release mcradds package, contains basic quantitative qualitative performance methods functions shown .","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"sample-size-0-1-0","dir":"Changelog","previous_headings":"","what":"Sample Size","title":"mcradds 0.1.0","text":"Added size_one_prop size_ci_one_prop sample size qualitative trials, size_corr size_ci_corr quantitative trials.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"classes-and-datasets-0-1-0","dir":"Changelog","previous_headings":"","what":"Classes and Datasets","title":"mcradds 0.1.0","text":"Added SampleSize, MCTab BAsummary classes show method. Added platelet qualData data sets example testing use.","code":""},{"path":"https://kaigu1990.github.io/mcradds/news/index.html","id":"analyzing-functions-and-methods-0-1-0","dir":"Changelog","previous_headings":"","what":"Analyzing Functions and Methods","title":"mcradds 0.1.0","text":"Added diagTab function get 2x2 contingency table, getAccuracy method compute qualitative diagnostic accuracy criteria. Added blandAltman function calculate statistics Bland-Altman, getOutlier method detect potential outliers. Added pearsonTest spearmanTest, efficient functions compute confidence interval hypothesis test. Added mcreg calcBias methods mcr package wrapped regression analysis.","code":""}]
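The changelog above notes that tukey_outlier and dixon_outlier were added to screen outliers before a reference range is established. A minimal R sketch of that workflow, using only calls whose signatures appear in the reference entries above (tukey_outlier() and robustRI()); the measurement vector is illustrative and loosely based on the EP28-A3 Appendix B values shown in the robustRI() example, not a packaged data set:

library(mcradds)

# Illustrative measurements: the EP28-A3 Appendix B style values from the
# robustRI() example plus one artificially high value.
x <- c(8.9, 9.2, rep(9.4, 2), rep(9.5, 3), rep(9.6, 4),
       rep(9.7, 5), 9.8, rep(9.9, 2), 10.2, 13.6)

# Screen with Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR) first ...
scr <- tukey_outlier(x)
scr$out                 # values flagged as potential outliers, if any

# ... then compute the robust reference interval on the retained values.
robustRI(scr$subset)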
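The spearmanTest() entry states that the hypothesis test is based on Fisher's Z transformation with the correlation variance recommended by Bonett and Wright (2000). Assuming that variance is (1 + rho^2/2)/(n - 3), the Z statistic and one-sided p-value reported in its example can be reproduced by hand; this is an illustrative check, not part of the package API:

# Reproduce the Z statistic from the spearmanTest() example by hand,
# assuming the Bonett & Wright (2000) variance (1 + r^2/2) / (n - 3).
r  <- 0.6      # Spearman correlation from the example
h0 <- 0.5      # hypothesized value
n  <- 9        # number of paired observations
se <- sqrt((1 + r^2 / 2) / (n - 3))
z  <- (atanh(r) - atanh(h0)) / se
z                              # ~0.3244, matching the reported Z
pnorm(z, lower.tail = FALSE)   # ~0.3728, the one-sided p-value for alternative = "greater"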