Classifier variability assessment; Compare Two Correlated C Indices with Right-censored Survival Outcome

DIDSR/ClassifierAssessment

Assessment of Classifiers

Project 2: Compare Two Correlated C Indices with Right-censored Survival Outcome

Kang L, Chen W, Petrick NA and Gallas BD (2015). “Comparing two correlated C indices with right-censored survival outcome: a one-shot nonparametric approach.” Statistics in Medicine, 34(4), pp. 685-703. http://dx.doi.org/10.1002/sim.6370.

The R package accompanying the paper above can be downloaded here (it includes source code as well as binary packages for various platforms): http://cran.r-project.org/web/packages/compareC/index.html

Project Description

Proposed by Harrell, the C index (or concordance C) is an overall measure of discrimination in survival analysis between a possibly right-censored survival outcome and a predictive-score variable, which can represent a measured biomarker or a composite score output by an algorithm that combines multiple biomarkers. This package statistically compares two C indices computed with a right-censored survival outcome; such indices commonly arise from a paired design and are therefore correlated.
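As an illustrative sketch of the quantity being compared (written in Python rather than R, and not the package's actual implementation), Harrell's C can be estimated by counting concordant pairs among the usable pairs, i.e. those whose ordering of survival times remains determinable despite right censoring:

```python
from itertools import combinations

def harrell_c(time, status, score):
    """Harrell's concordance C for right-censored data (illustrative).

    A pair (i, j) is usable when the subject with the shorter observed
    time experienced the event (status == 1); the pair is concordant
    when that subject also has the higher risk score.
    """
    concordant = 0.0
    usable = 0
    for i, j in combinations(range(len(time)), 2):
        # order the pair so subject a has the shorter observed time
        a, b = (i, j) if time[i] < time[j] else (j, i)
        if time[a] == time[b] or status[a] == 0:
            continue  # ordering of the survival times is not determinable
        usable += 1
        if score[a] > score[b]:
            concordant += 1.0
        elif score[a] == score[b]:
            concordant += 0.5  # tied scores count as half-concordant
    return concordant / usable

# toy example: higher score predicts shorter survival (1 = event, 0 = censored)
time   = [2, 4, 6, 8, 10]
status = [1, 1, 0, 1, 0]
score  = [0.9, 0.7, 0.6, 0.4, 0.2]
print(harrell_c(time, status, score))  # perfectly concordant -> 1.0
```

The one-shot approach of the paper goes further, estimating the variance of (and covariance between) two such C indices computed on the same subjects so that their difference can be tested directly.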

Project 1: Classifier variability: accounting for training and testing

W. Chen, B. D. Gallas, W. A. Yousef, "Classifier variability: accounting for training and testing," Pattern Recognition, 45:2661-2671 (2012).

Project Description

This project aims to develop statistical methodology and tools for the assessment of pattern classifiers. Specifically, supervised binary pattern classifiers are trained and tested with finite datasets. We assess the performance of a classifier in terms of the area under the receiver operating characteristic (ROC) curve (AUC), and we develop analytical methods to quantify the variability of the estimated AUC that accounts for randomness from both the finite training set and the finite test set.
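A minimal sketch of the setting (illustrative Python, not the paper's analytical estimator): the empirical AUC is the Mann-Whitney two-sample statistic, and repeating the whole train-and-test cycle on freshly drawn finite datasets exposes the combined training-and-testing variability that the analytical methods quantify. The toy one-dimensional Gaussian model and all function names below are hypothetical:

```python
import random

def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC: the Mann-Whitney two-sample statistic."""
    total = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5  # ties count as half
    return total / (len(pos_scores) * len(neg_scores))

def train_and_test(rng, n_train=30, n_test=30):
    """One train/test cycle on a toy 1-D two-class Gaussian problem."""
    # "training" estimates the two class means from a finite sample
    train_pos = [rng.gauss(1.0, 1.0) for _ in range(n_train)]
    train_neg = [rng.gauss(0.0, 1.0) for _ in range(n_train)]
    mu_pos = sum(train_pos) / n_train
    mu_neg = sum(train_neg) / n_train
    # the trained classifier scores x by which estimated mean is closer
    def score(x):
        return -(x - mu_pos) ** 2 + (x - mu_neg) ** 2
    test_pos = [score(rng.gauss(1.0, 1.0)) for _ in range(n_test)]
    test_neg = [score(rng.gauss(0.0, 1.0)) for _ in range(n_test)]
    return empirical_auc(test_pos, test_neg)

# Monte Carlo view of the total (training + testing) variability of AUC
rng = random.Random(0)
aucs = [train_and_test(rng) for _ in range(200)]
mean = sum(aucs) / len(aucs)
var = sum((a - mean) ** 2 for a in aucs) / (len(aucs) - 1)
print(f"mean AUC ~ {mean:.3f}, total variance ~ {var:.4f}")
```

The point of the project is to obtain this kind of variance analytically from a single training set and test set, rather than by re-drawing many datasets as the Monte Carlo sketch above does.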

Disclaimer

This software and documentation (the "Software") were developed at the Food and Drug Administration (FDA) by employees of the Federal Government in the course of their official duties. Pursuant to Title 17, Section 105 of the United States Code, this work is not subject to copyright protection and is in the public domain. Permission is hereby granted, free of charge, to any person obtaining a copy of the Software, to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, or sell copies of the Software or derivatives, and to permit persons to whom the Software is furnished to do so. FDA assumes no responsibility whatsoever for use by other parties of the Software, its source code, documentation or compiled executables, and makes no guarantees, expressed or implied, about its quality, reliability, or any other characteristic. Further, use of this code in no way implies endorsement by the FDA or confers any advantage in regulatory decisions. Although this software can be redistributed and/or modified freely, we ask that any derivative works bear some notice that they are derived from it, and any modified versions bear some notice that they have been modified.
