From 316b758338e44df510a1816eb27fed6e8db7afa1 Mon Sep 17 00:00:00 2001
From: TrisCC
Date: Wed, 26 Jun 2024 14:25:15 +0200
Subject: [PATCH] Improve documentation

---
 docs/contributing.md                   | 47 ++++++++++++++++++
 docs/{how-to-guides.md => examples.md} | 68 ++++++++++++++++----------
 docs/getting-started.md                | 59 ++++++++++++++++++++++
 docs/index.md                          | 54 +++++---------------
 mkdocs.yml                             |  6 ++-
 5 files changed, 163 insertions(+), 71 deletions(-)
 create mode 100644 docs/contributing.md
 rename docs/{how-to-guides.md => examples.md} (89%)
 create mode 100644 docs/getting-started.md

diff --git a/docs/contributing.md b/docs/contributing.md
new file mode 100644
index 0000000..0d84df7
--- /dev/null
+++ b/docs/contributing.md
@@ -0,0 +1,47 @@
+# Contributing
+
+Auto-Verify is an open-source project that welcomes contributions. You can find open issues [here](https://github.com/ADA-research/auto-verify/issues).
+
+## Setting up the development environment
+
+### Forking the repository
+
+To make changes to the code, you will need to fork the [repository](https://github.com/ADA-research/auto-verify).
+
+### Setting up the conda environment
+
+To set up your development environment for Auto-Verify, you will need to install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/); other environment managers will not work.
+
+After installing Miniconda, you will need to create a conda environment. For example, you can use the commands below to create an environment for Auto-Verify.
+
+```
+> conda create -n auto-verify python=3.10
+> conda activate auto-verify
+```
+
+### Setting up Auto-Verify for development
+
+To work on your local copy of Auto-Verify, install it as an editable package. You can do this by running the following command in your forked repository:
+
+```
+> pip install -e .
+```
+
+After doing this, you can run scripts locally and make your changes to the code.
+
+!!! warning
+
+    You will still need to install the different verification algorithms with the CLI tool to use them.
+
+### Running tests with Tox
+
+To make sure the existing code keeps working and everything stays neat and tidy, run Tox, which runs the tests, the linter, and the integration tests. You will need to install Tox before you can run it, which you can do with the following commands.
+
+```
+> pip install tox
+> tox
+```
+
+## Submitting your code
+
+After you have made your modifications, open a pull request [here](https://github.com/ADA-research/auto-verify/pulls). If a maintainer approves the changes and the pipeline succeeds, the pull request will be merged into the main repository.
diff --git a/docs/how-to-guides.md b/docs/examples.md
similarity index 89%
rename from docs/how-to-guides.md
rename to docs/examples.md
index 78fc6fb..6174e87 100644
--- a/docs/how-to-guides.md
+++ b/docs/examples.md
@@ -1,4 +1,43 @@
-# How-To-Guides
+# Examples
+
+This section contains a couple of examples of how you can use Auto-Verify.
+
+## Using Datasets
+
+If you have your own models to use with Auto-Verify, you can use those. However, if you just want to try it out, we recommend using the datasets provided for the VNNCOMP competition. Auto-Verify supports reading benchmarks defined in VNNCOMP style, which are benchmarks with the following structure:
+
+```
+vnncomp2022
+└── test_props
+    ├── instances.csv
+    ├── onnx
+    │   ├── test_nano.onnx
+    │   ├── test_sat.onnx
+    │   └── test_unsat.onnx
+    └── vnnlib
+        ├── test_nano.vnnlib
+        └── test_prop.vnnlib
+```
+
+Here, `instances.csv` is a CSV file with three columns: network, property, timeout.
+For example, the `test_props` directory contains the following 3 verification instances:
+
+```
+onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
+onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
+onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60
+```
+
+You can find the datasets from the previous years here:
+
+- [VNNCOMP 2023](https://github.com/stanleybak/vnncomp2023)
+
+- [VNNCOMP 2022](https://github.com/stanleybak/vnncomp2022)
+
+- [VNNCOMP 2021](https://github.com/stanleybak/vnncomp2021)
+
+- [VNNCOMP 2020](https://github.com/verivital/vnn-comp)
+
+After downloading one of these datasets, unzip the data files you need, or use the scripts provided in the repositories to prepare the files.
 
 ## Verifying Properties
 
@@ -29,32 +68,7 @@ if __name__ == "__main__":
 
 ### Running VNNCOMP Benchmarks
 
-Auto-Verify supports reading benchmarks defined in VNNCOMP style, which are benchmarks with the following structure:
-
-```
-vnncomp2022
-└── test_props
-    ├── instances.csv
-    ├── onnx
-    │   ├── test_nano.onnx
-    │   ├── test_sat.onnx
-    │   └── test_unsat.onnx
-    └── vnnlib
-        ├── test_nano.vnnlib
-        └── test_prop.vnnlib
-```
-
-Where `instances.csv` is a `csv` file with 3 columns: network, property, timeout. For example, the `test_props` directory contains the following 3 verification instaces:
-
-```
-onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
-onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
-onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60
-```
-
-VNNCOMP Benchmarks can found at the following links: [2022](https://github.com/ChristopherBrix/vnncomp2022_benchmarks/tree/main/benchmarks), [2023](https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks). Make sure to unzip all files inside the benchmark after you have downloaded it.
-
-Below is a code snippet that runs this benchmark. Note the `Path` pointing to the benchmark location.
+Below is a code snippet that runs the VNNCOMP test_props benchmark. Note the `Path` pointing to the benchmark location.
 
 ```py
 from pathlib import Path
diff --git a/docs/getting-started.md b/docs/getting-started.md
new file mode 100644
index 0000000..1b553a6
--- /dev/null
+++ b/docs/getting-started.md
@@ -0,0 +1,59 @@
+### System requirements
+
+Auto-Verify has only been tested on Linux and will not work on macOS or Windows. Alternatively, Windows users can use the [Windows Subsystem for Linux](https://learn.microsoft.com/nl-nl/windows/wsl/about).
+
+Additionally, if you want to make use of the GPU-based verification algorithms, you will need a CUDA-enabled GPU.
+
+### Installing Auto-Verify
+
+First, install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/). Miniconda is used to manage the environments of the different verification tools; other environment managers will _not_ work.
+
+!!! warning
+
+    Anaconda can fail trying to install environments in some cases where Miniconda does not.
+
+After Miniconda is installed, set up Auto-Verify by running the following commands:
+
+```
+> conda create -n auto-verify python=3.10
+> conda activate auto-verify
+> pip install auto-verify
+```
+
+To check whether the installation was successful, run:
+
+```bash
+> auto-verify --version
+```
+
+### Installing Verification Tools
+
+Currently, Auto-Verify supports the following verifiers:
+
+- [nnenum](https://github.com/stanleybak/nnenum) (_Stanley Bak_)
+- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al._)
+- [VeriNet](https://github.com/vas-group-imperial/VeriNet) (_VAS Group_)
+- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)
+
+These verifiers can be installed as follows:
+
+```
+> auto-verify install nnenum
+> auto-verify install abcrown
+> auto-verify install verinet
+> auto-verify install ovalbab
+```
+
+To uninstall a verifier, run:
+
+```
+> auto-verify uninstall [verifier]
+```
+
+### Downloading Datasets
+
+If you have your own models to use with Auto-Verify, you can use those.
+However, if you just want to try it out, we recommend using the datasets provided for the VNNCOMP competition. These datasets are already compatible with Auto-Verify, and APIs are available to work with them directly. You can find the datasets from the previous years here:
+
+- [VNNCOMP 2023](https://github.com/stanleybak/vnncomp2023)
+- [VNNCOMP 2022](https://github.com/stanleybak/vnncomp2022)
+- [VNNCOMP 2021](https://github.com/stanleybak/vnncomp2021)
+- [VNNCOMP 2020](https://github.com/verivital/vnn-comp)
diff --git a/docs/index.md b/docs/index.md
index 5987550..1aad0c2 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,55 +1,25 @@
-# Auto-Verify
+# About Auto-Verify
 
 Auto-Verify is a tool that provides interfaces, parameter spaces and installation managers for different neural network verification tools.
 
-## Getting Started
+You can use the following guides to get started with Auto-Verify:
 
-!!! warning
+- [Getting Started](getting-started.md)
+- [Examples](examples.md)
+- [Contributing](contributing.md)
 
-    Auto-Verify has only been tested for Linux and will not work on MacOS and Windows.
+
-### Installing Auto-Verify
+## Included verification algorithms
 
-First, install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/). Miniconda is used to manage the environments of different verification tools, other environment managers will _not_ work.
+To build a portfolio of different verification algorithms, Auto-Verify includes the following verifiers:
 
-!!! warning
-
-    Anaconda can fail trying to install environments in some cases where Miniconda does not.
-
-After Miniconda is installed, setup Auto-Verify by running the following commands:
-
-```
-> conda create -n auto-verify python=3.10
-> conda activate auto-verify
-> pip install auto-verify
-```
-
-To check if the installation was succesful, run:
-
-```bash
-> auto-verify --version
-```
-
-### Installing Verification Tools
-
-Currently, Auto-Verify supports the following verifiers:
+### CPU-based verification algorithms
 
 - [nnenum](https://github.com/stanleybak/nnenum) (_Stanley Bak_)
-- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al_)
 - [VeriNet](https://github.com/vas-group-imperial/VeriNet) (_VAS Group_)
-- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)
-
-These verifiers can be installed as follows:
-```
-> auto-verify install nnenum
-> auto-verify install abcrown
-> auto-verify install verinet
-> auto-verify install ovalbab
-```
+### GPU-based verification algorithms
 
-To uninstall a verifier, run:
-
-```
-> auto-verify uninstall [verifier]
-```
+- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al._)
+- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)
diff --git a/mkdocs.yml b/mkdocs.yml
index 3381d5f..28ed11c 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -43,6 +43,8 @@ markdown_extensions:
   - def_list
 
 nav:
-  - Auto-Verify: index.md
-  - How-To Guides: how-to-guides.md
+  - About: index.md
+  - Getting Started: getting-started.md
+  - Examples: examples.md
   - API: api.md
+  - Contributing: contributing.md
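
A note to accompany the patch: the `instances.csv` format that the new `docs/examples.md` describes (three columns: network, property, timeout) is plain CSV, so it can be loaded with the Python standard library alone. The sketch below is illustrative only; the `read_instances` helper is a hypothetical name, not part of Auto-Verify's API.

```python
import csv
import tempfile
from pathlib import Path


def read_instances(path: Path) -> list[tuple[Path, Path, float]]:
    """Parse a VNNCOMP-style instances.csv into (network, property, timeout) triples."""
    with open(path, newline="") as f:
        # Each row is: relative ONNX path, relative VNNLIB path, timeout in seconds.
        return [
            (Path(network), Path(prop), float(timeout))
            for network, prop, timeout in csv.reader(f)
        ]


# Demo on the three test_props rows shown in docs/examples.md.
with tempfile.TemporaryDirectory() as tmp:
    csv_path = Path(tmp) / "instances.csv"
    csv_path.write_text(
        "onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60\n"
        "onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60\n"
        "onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60\n"
    )
    instances = read_instances(csv_path)
    print(len(instances))   # 3
    print(instances[0][2])  # 60.0
```

Each parsed triple can then be handed to whichever verifier should run the instance, with the timeout in seconds.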