Improve documentation #92

Open · wants to merge 1 commit into `main`
47 changes: 47 additions & 0 deletions docs/contributing.md
@@ -0,0 +1,47 @@
# Contributing

Auto-Verify is an open source project that accepts contributions. You can find open issues [here](https://github.com/ADA-research/auto-verify/issues).

## Setting up the development environment

### Forking the repository

To make changes to the code, you will need to fork the [repository](https://github.com/ADA-research/auto-verify).

### Setting up the conda environment

To set up your development environment for Auto-Verify, you will need to install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/), which Auto-Verify uses to manage the environments of the individual verification tools; other environment managers will _not_ work.

After installing Miniconda, you will need to create a conda environment. For example, you can use the commands below to create an environment for Auto-Verify.

```
> conda create -n auto-verify python=3.10
> conda activate auto-verify
```

### Setting up Auto-Verify for development

To work on Auto-Verify locally, install your fork as an editable package. You can do this by running the following command in the root of your forked repository:

```
> pip install -e .
```

After doing this, you can run scripts locally and make your changes to the code.

!!! warning

    You will still need to install the individual verification algorithms with the CLI tool before you can use them.

### Running tests with Tox

To ensure that the existing code keeps working and everything is neat and tidy, you should run Tox, which runs the tests, the linter and the integration tests. You will need to install Tox first; you can do both with the following commands.

```
> pip install tox
> tox
```

## Submitting your code

After you have made your modifications, open a pull request [here](https://github.com/ADA-research/auto-verify/pulls). If a maintainer approves the changes and the pipeline succeeds, the pull request will be merged into the main repository.
68 changes: 41 additions & 27 deletions docs/how-to-guides.md → docs/examples.md
@@ -1,4 +1,43 @@
# How-To-Guides
# Examples

This section contains a couple of examples of how you can use Auto-Verify.

## Using Datasets

You can use your own models with Auto-Verify. However, if you just want to try it out, we recommend using the datasets provided for the VNNCOMP competition. Auto-Verify supports reading benchmarks defined in the VNNCOMP style, i.e. benchmarks with the following structure:

```
vnncomp2022
└── test_props
├── instances.csv
├── onnx
│   ├── test_nano.onnx
│   ├── test_sat.onnx
│   └── test_unsat.onnx
└── vnnlib
├── test_nano.vnnlib
└── test_prop.vnnlib
```
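
Before running anything against a downloaded benchmark, it can help to sanity-check that the layout above is in place. The sketch below is illustrative only; the benchmark path is a hypothetical placeholder you should adjust.

```python
from pathlib import Path


def check_vnncomp_layout(benchmark: Path) -> list[str]:
    """Return the names of required entries missing from a VNNCOMP benchmark dir."""
    required = ["instances.csv", "onnx", "vnnlib"]
    return [name for name in required if not (benchmark / name).exists()]


# Hypothetical path: point this at your extracted benchmark.
missing = check_vnncomp_layout(Path("vnncomp2022/test_props"))
if missing:
    print(f"Benchmark directory is missing: {missing}")
```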

Here, `instances.csv` is a CSV file with three columns: network, property and timeout. For example, the `test_props` directory contains the following three verification instances:

```
onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60
```
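
Since this is plain CSV, the instance list can be read with Python's standard `csv` module. The snippet below is a hedged sketch, and the benchmark path is a hypothetical placeholder.

```python
import csv
from pathlib import Path


def read_instances(instances_csv: Path) -> list[tuple[str, str, int]]:
    """Read (network, property, timeout) triples from a VNNCOMP instances.csv."""
    with instances_csv.open(newline="") as f:
        return [(net, prop, int(timeout)) for net, prop, timeout in csv.reader(f)]


# Hypothetical path: point this at your extracted benchmark.
instances_file = Path("vnncomp2022/test_props/instances.csv")
if instances_file.exists():
    for network, prop, timeout in read_instances(instances_file):
        print(f"{network} against {prop} (timeout: {timeout}s)")
```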

You can find the datasets from the previous years here:

- [VNNCOMP 2023](https://github.com/stanleybak/vnncomp2023)
- [VNNCOMP 2022](https://github.com/stanleybak/vnncomp2022)
- [VNNCOMP 2021](https://github.com/stanleybak/vnncomp2021)
- [VNNCOMP 2020](https://github.com/verivital/vnn-comp)

After downloading one of these datasets, you will need to unzip the data files you want to use, or use the scripts provided in the repositories to prepare the files.
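
As a hedged sketch, assuming the data files are gzip-compressed (as in the VNNCOMP repositories) and using a placeholder benchmark path, decompressing everything in one go could look like this:

```shell
benchmark=vnncomp2022  # hypothetical path: adjust to your download location
if [ -d "$benchmark" ]; then
    # Decompress every gzip-compressed data file inside the benchmark tree.
    find "$benchmark" -name '*.gz' -exec gunzip {} +
fi
```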

## Verifying Properties

@@ -29,32 +68,7 @@ if __name__ == "__main__":

### Running VNNCOMP Benchmarks

Auto-Verify supports reading benchmarks defined in VNNCOMP style, which are benchmarks with the following structure:

```
vnncomp2022
└── test_props
├── instances.csv
├── onnx
│   ├── test_nano.onnx
│   ├── test_sat.onnx
│   └── test_unsat.onnx
└── vnnlib
├── test_nano.vnnlib
└── test_prop.vnnlib
```

Here, `instances.csv` is a CSV file with three columns: network, property and timeout. For example, the `test_props` directory contains the following three verification instances:

```
onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60
```

VNNCOMP benchmarks can be found at the following links: [2022](https://github.com/ChristopherBrix/vnncomp2022_benchmarks/tree/main/benchmarks), [2023](https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks). Make sure to unzip all files inside the benchmark after you have downloaded it.

Below is a code snippet that runs this benchmark. Note the `Path` pointing to the benchmark location.
Below is a code snippet that runs the VNNCOMP test_props benchmark. Note the `Path` pointing to the benchmark location.

```py
from pathlib import Path
# … (rest of the snippet truncated in the diff view)
```
59 changes: 59 additions & 0 deletions docs/getting-started.md
@@ -0,0 +1,59 @@
### System requirements

Auto-Verify has only been tested on Linux and will not work on macOS or Windows. On Windows, using the [Windows Subsystem for Linux](https://learn.microsoft.com/nl-nl/windows/wsl/about) is an option.

Additionally, if you want to make use of the GPU-based verification algorithms, you will need a CUDA-enabled GPU.

### Installing Auto-Verify

First, install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/). Miniconda is used to manage the environments of the different verification tools; other environment managers will _not_ work.

!!! warning

Anaconda can fail trying to install environments in some cases where Miniconda does not.

After Miniconda is installed, setup Auto-Verify by running the following commands:

```
> conda create -n auto-verify python=3.10
> conda activate auto-verify
> pip install auto-verify
```

To check if the installation was successful, run:

```bash
> auto-verify --version
```

### Installing Verification Tools

Currently, Auto-Verify supports the following verifiers:

- [nnenum](https://github.com/stanleybak/nnenum) (_Stanley Bak_)
- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al_)
- [VeriNet](https://github.com/vas-group-imperial/VeriNet) (_VAS Group_)
- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)

These verifiers can be installed as follows:

```
> auto-verify install nnenum
> auto-verify install abcrown
> auto-verify install verinet
> auto-verify install ovalbab
```

To uninstall a verifier, run:

```
> auto-verify uninstall [verifier]
```

### Downloading Datasets

You can use your own models with Auto-Verify. However, if you just want to try it out, we recommend using the datasets provided for the VNNCOMP competition. These datasets are already compatible with Auto-Verify, and APIs are available to work with them directly. You can find the datasets from the previous years here:

- [VNNCOMP 2023](https://github.com/stanleybak/vnncomp2023)
- [VNNCOMP 2022](https://github.com/stanleybak/vnncomp2022)
- [VNNCOMP 2021](https://github.com/stanleybak/vnncomp2021)
- [VNNCOMP 2020](https://github.com/verivital/vnn-comp)
54 changes: 12 additions & 42 deletions docs/index.md
@@ -1,55 +1,25 @@
# Auto-Verify
# About Auto-Verify

Auto-Verify is a tool that provides interfaces, parameter spaces and installation managers for different neural network verification tools.

## Getting Started
You can use the following guides to get started with Auto-Verify:

!!! warning
- [Getting Started](getting-started.md)
- [Examples](examples.md)
- [Contributing](contributing.md)

Auto-Verify has only been tested on Linux and will not work on macOS or Windows.
<!-- TODO: Write some information about the portfolio construction algorithms -->

### Installing Auto-Verify
## Included verification algorithms

First, install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/). Miniconda is used to manage the environments of the different verification tools; other environment managers will _not_ work.
To construct a portfolio of different verification algorithms, Auto-Verify includes the following verifiers:

!!! warning

Anaconda can fail trying to install environments in some cases where Miniconda does not.

After Miniconda is installed, setup Auto-Verify by running the following commands:

```
> conda create -n auto-verify python=3.10
> conda activate auto-verify
> pip install auto-verify
```

To check if the installation was successful, run:

```bash
> auto-verify --version
```

### Installing Verification Tools

Currently, Auto-Verify supports the following verifiers:
### CPU-based verification algorithms

- [nnenum](https://github.com/stanleybak/nnenum) (_Stanley Bak_)
- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al_)
- [VeriNet](https://github.com/vas-group-imperial/VeriNet) (_VAS Group_)
- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)

These verifiers can be installed as follows:

```
> auto-verify install nnenum
> auto-verify install abcrown
> auto-verify install verinet
> auto-verify install ovalbab
```
### GPU-based verification algorithms

To uninstall a verifier, run:

```
> auto-verify uninstall [verifier]
```
- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al_)
- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)
6 changes: 4 additions & 2 deletions mkdocs.yml
@@ -43,6 +43,8 @@ markdown_extensions:
- def_list

nav:
- Auto-Verify: index.md
- How-To Guides: how-to-guides.md
- About: index.md
- Getting started: getting-started.md
- Examples: examples.md
- API: api.md
- Contributing: contributing.md