- Clone this repo to your local filesystem with its dependent submodules:

  ```bash
  (base) $ git clone --recurse-submodules https://github.com/jaredyam/auto-inpainting.git
  ```
- `cd` to the project directory and create a new virtual environment with conda:

  ```bash
  (base) $ cd <path/to/auto-inpainting>
  (base) $ conda create --name auto-inpainting python=3.8
  (base) $ conda activate auto-inpainting
  (auto-inpainting) $ pip install -r requirements.txt
  ```
Before running inference, we first need to prepare the pre-trained models (from their original repos):
- segmentation model:
  - U2Net: download the pre-trained model `u2net.pth` (176.3 MB) from GoogleDrive or Baidu Wangpan (code: pf9k) and put it into the directory `./segmentation-models/U-2-Net/saved_models/u2net/`.
- inpainting model:
  - LaMa: download the folder `big-lama` and put it into the directory `./inpainting-models/lama/`.
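A quick way to confirm the downloaded weights landed in the expected locations is a small path check. The sketch below is not part of the repo; the paths are copied from the list above, and treating `big-lama` as a directory of checkpoints is an assumption.

```python
from pathlib import Path

# Expected locations, copied from the download instructions above.
# Treating big-lama as a directory of checkpoints is an assumption.
expected = [
    Path("segmentation-models/U-2-Net/saved_models/u2net/u2net.pth"),
    Path("inpainting-models/lama/big-lama"),
]

for path in expected:
    status = "OK" if path.exists() else "MISSING"
    print(f"[{status}] {path}")
```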
Then, run inference on a single image:

```bash
(auto-inpainting) $ bash inference.sh <path/to/input-image>
```
| Original | Demo |
| --- | --- |
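If you have a whole folder of images rather than a single file, a thin wrapper around `inference.sh` is enough. The snippet below is only a usage sketch: the `samples/` directory and the `*.jpg` glob are assumptions, not part of the repo.

```python
import subprocess
from pathlib import Path

input_dir = Path("samples")  # hypothetical folder of input images

for image in sorted(input_dir.glob("*.jpg")):
    # inference.sh takes a single image path, as shown above.
    subprocess.run(["bash", "inference.sh", str(image)], check=True)
```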