## Installation

You can install the Minkowski Engine with `pip`, with anaconda, or on the system directly. If you experience issues installing the package, please check out [the installation wiki page](https://github.com/NVIDIA/MinkowskiEngine/wiki/Installation).
If you cannot find a relevant problem listed there, please report the issue on [the github issue page](https://github.com/NVIDIA/MinkowskiEngine/issues).

- [PIP](https://github.com/NVIDIA/MinkowskiEngine#pip) installation
- [Conda](https://github.com/NVIDIA/MinkowskiEngine#anaconda) installation
- [Python](https://github.com/NVIDIA/MinkowskiEngine#system-python) installation

### Pip

The MinkowskiEngine is distributed via [PyPI MinkowskiEngine][pypi-url] and can be installed simply with `pip`.
First, install pytorch following the [instruction](https://pytorch.org). Next, install openblas.

```
sudo apt install libopenblas-dev
pip install torch
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v
```
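
As a quick sanity check after installation, you can try importing the package. The following is a hypothetical snippet, not part of the official instructions; it only assumes that pytorch and MinkowskiEngine installed correctly.

```
# Hypothetical post-install check (not from the README): verify that the import
# works and report which backend is available.
import torch
import MinkowskiEngine as ME

print(ME.__version__)             # installed MinkowskiEngine version
print(torch.cuda.is_available())  # True if the CUDA backend can be used
```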

### Pip from the latest source

If you want to specify arguments for the setup script, please refer to the following command.

```
sudo apt install libopenblas-dev
pip install torch
export CXX=g++-7
# Uncomment some options if things don't work
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine \
# \ # uncomment the following line if you want to force cuda installation
# --install-option="--force_cuda"
```

### Anaconda

We recommend `python>=3.6` for installation.


First, follow [the anaconda documentation](https://docs.anaconda.com/anaconda/install/) to install anaconda on your computer.

```
conda create -n py3-mink python=3.8
conda activate py3-mink
conda install numpy mkl-include pytorch cudatoolkit=11.0 -c pytorch
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine
```


### System Python

As with the anaconda installation, make sure that you install a pytorch build that uses the same CUDA version that `nvcc` uses.
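
As a rough way to compare the two, you can print the CUDA version pytorch was built with and check it against the output of `nvcc --version`. This is a small illustrative snippet, not part of the original instructions.

```
# Illustrative check: the value printed here should match the CUDA release
# reported by `nvcc --version`.
import torch

print(torch.version.cuda)  # CUDA version pytorch was compiled against, e.g. '11.0'
```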
```
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install
# To specify blas, CUDA_HOME and force CUDA installation, use the following command
# python setup.py install --blas=openblas --cuda_home=/usr/local/cuda --force_cuda
```


For issues not covered by the API documentation and for feature requests, feel free to submit an issue on the [github issue page](https://github.com/NVIDIA/MinkowskiEngine/issues).


## Known Issues

### Too much GPU memory usage or Frequent Out of Memory
However, pytorch is implemented assuming that the number of points, or the size of the tensors, does not change at every iteration.

Specifically, pytorch caches chunks of memory to speed up the allocation performed for every tensor creation. If it cannot find a suitable cached block, it either splits an existing cached block or allocates new space when no cached block is large enough for the requested size. Thus, every time we use a different number of points (number of non-zero elements) with pytorch, it either splits an existing cached block or reserves new memory. If the cache becomes too fragmented and occupies all of the GPU memory, pytorch will raise an out-of-memory error.

**To prevent this, you must clear the cache at regular intervals with `torch.cuda.empty_cache()`.**
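
For example, a minimal sketch of such periodic cache clearing might look like the following. The workload and the interval of 50 iterations are placeholders, not code from this repository.

```
# Minimal sketch (placeholder workload): tensors whose first dimension changes
# every iteration, with the cache cleared at a regular interval.
import torch

empty_cache_freq = 50  # example interval; tune for your workload

for i in range(1000):
    n_points = int(torch.randint(1_000, 100_000, (1,)))  # varying number of non-zero elements
    feats = torch.rand(n_points, 32, device="cuda")       # stand-in for per-point features
    out = feats @ torch.rand(32, 64, device="cuda")       # some GPU work
    del feats, out

    if i % empty_cache_freq == 0:
        torch.cuda.empty_cache()  # release cached, unused memory back to the GPU
```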


### Running the MinkowskiEngine on nodes with a large number of CPUs

The MinkowskiEngine uses OpenMP to parallelize the kernel map generation. However, when the number of threads used for parallelization is too large (e.g. `OMP_NUM_THREADS=80`), the efficiency drops rapidly as all threads simply wait for multithreaded locks to be released.

In such cases, set the number of threads OpenMP uses explicitly. Usually, any number below 24 works well, but search for the optimal setting on your system.

```
# set the number of OpenMP threads before running your program
export OMP_NUM_THREADS=<number of threads>; python your_program.py
```
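
If you prefer to set the thread count from Python, the sketch below shows one way to do it. It assumes the environment variable is set before `torch` and `MinkowskiEngine` are imported, since OpenMP reads it when its runtime is initialized.

```
# Assumption: OMP_NUM_THREADS must be set before the OpenMP runtime starts,
# so set it before importing torch or MinkowskiEngine.
import os

os.environ["OMP_NUM_THREADS"] = "12"  # example value; tune for your system

import torch
import MinkowskiEngine as ME
```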
