- Surprise Adequacies (a DSA sketch follows this list)
  - Distance-based Surprise Adequacy (DSA)
  - Likelihood-based Surprise Adequacy (LSA)
  - MultiModal-Likelihood-based Surprise Adequacy (MLSA)
  - Mahalanobis-based Surprise Adequacy (MDSA)
  - an abstract MultiModal Surprise Adequacy
- Surprise Coverage
- Neuron coverage metrics (a NAC sketch follows this list)
  - Neuron-Activation Coverage (NAC)
  - K-Multisection Neuron Coverage (KMNC)
  - Neuron Boundary Coverage (NBC)
  - Strong Neuron Activation Coverage (SNAC)
  - Top-k Neuron Coverage (TKNC)
- Utilities (APFD and CAM sketches follow this list)
  - APFD calculation
  - Coverage-Added and Coverage-Total Prioritization Methods (CAM and CTM)
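To give a feel for what DSA computes, here is a minimal NumPy sketch of the idea from Kim et al.'s surprise adequacy paper: the distance from a test input's activation trace to the closest training trace of the predicted class, normalized by that training trace's distance to the nearest other class. The helper name `dsa` and its signature are hypothetical, not dnn-tip's actual API; see the documentation linked below for the real interface.

```python
import numpy as np

def dsa(test_at, test_pred, train_ats, train_labels):
    """Illustrative DSA (hypothetical helper, not the dnn-tip API).

    test_at: one activation trace, shape (n_neurons,)
    test_pred: the class predicted for the test input
    train_ats: training activation traces, shape (n_train, n_neurons)
    train_labels: training labels, shape (n_train,)
    """
    same = train_ats[train_labels == test_pred]   # traces of the predicted class
    other = train_ats[train_labels != test_pred]  # traces of all other classes
    # dist_a: distance to the closest same-class training trace
    dists_same = np.linalg.norm(same - test_at, axis=1)
    closest = same[np.argmin(dists_same)]
    dist_a = dists_same.min()
    # dist_b: distance from that closest trace to the nearest other-class trace
    dist_b = np.linalg.norm(other - closest, axis=1).min()
    return dist_a / dist_b
```

Higher DSA values indicate inputs that are more surprising relative to the training data, which is what makes the metric useful for test prioritization.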
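Similarly, here is a sketch of Neuron-Activation Coverage as defined in the DeepXplore line of work: the fraction of neurons whose activation exceeds a threshold for at least one input in the suite. Again, the function name is hypothetical and activations are assumed to be pre-scaled (e.g. min-max per neuron); dnn-tip's actual interface may differ.

```python
import numpy as np

def neuron_activation_coverage(activations, threshold=0.5):
    """Illustrative NAC (not the dnn-tip API).

    activations: (scaled) neuron activations, shape (n_inputs, n_neurons)
    Returns the fraction of neurons activated above `threshold`
    by at least one input.
    """
    covered = (activations > threshold).any(axis=0)
    return covered.mean()
```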
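The APFD (Average Percentage of Fault Detection) utility can likewise be illustrated in a few lines. In DNN test prioritization, misclassified inputs are typically treated as the faults; given a prioritized test order, APFD rewards orderings that surface those faults early. The sketch below uses the standard formula `1 - (sum of fault ranks)/(n*m) + 1/(2n)`; the helper name is hypothetical.

```python
import numpy as np

def apfd(misclassified_in_order):
    """Illustrative APFD (not the dnn-tip API).

    misclassified_in_order: boolean array, True where the i-th test in the
    prioritized order reveals a fault (e.g. a misclassification).
    """
    n = len(misclassified_in_order)
    fault_positions = np.flatnonzero(misclassified_in_order) + 1  # 1-based ranks
    m = len(fault_positions)
    return 1.0 - fault_positions.sum() / (n * m) + 1.0 / (2 * n)
```

An APFD of 1 would mean all faults are detected by the very first tests; 0.5 is roughly what a random ordering achieves.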
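Finally, the difference between the two coverage-based prioritization methods: CTM simply orders tests by their total coverage, while CAM greedily picks, at each step, the test that adds the most not-yet-covered entities. A minimal CAM sketch, assuming a boolean per-test coverage matrix (the function name is again hypothetical, not dnn-tip's API):

```python
import numpy as np

def cam_order(coverage_matrix):
    """Illustrative CAM (not the dnn-tip API): greedy coverage-added ordering.

    coverage_matrix: boolean, shape (n_tests, n_entities), True where a test
    covers an entity (e.g. a neuron). Ties and zero-gain picks are broken
    arbitrarily in this sketch.
    """
    remaining = list(range(len(coverage_matrix)))
    covered = np.zeros(coverage_matrix.shape[1], dtype=bool)
    order = []
    while remaining:
        gains = [np.count_nonzero(coverage_matrix[i] & ~covered) for i in remaining]
        best = remaining[int(np.argmax(gains))]
        order.append(best)
        covered |= coverage_matrix[best]
        remaining.remove(best)
    return order
```

CTM, by contrast, is a one-liner over the same matrix: `np.argsort(-coverage_matrix.sum(axis=1))`.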
If you are looking for the uncertainty metrics we also tested (including DeepGini), head over to the sister repository uncertainty-wizard.
If you want to reproduce our exact experiments, a reproduction package and Docker setup are available at testingautomated-usi/simple-tip.
It's as easy as `pip install dnn-tip`.
Find the documentation at https://testingautomated-usi.github.io/dnn-tip/.
Here's the reference to the paper as part of which this library was released:
@inproceedings{10.1145/3533767.3534375,
author = {Weiss, Michael and Tonella, Paolo},
title = {Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study)},
year = {2022},
isbn = {9781450393799},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3533767.3534375},
doi = {10.1145/3533767.3534375},
booktitle = {Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis},
pages = {139–150},
numpages = {12},
keywords = {neural networks, test prioritization, uncertainty quantification},
location = {Virtual, South Korea},
series = {ISSTA 2022}
}