- Clone this repository
- Clone submodules: `git submodule update --init --recursive`
- Create the conda env from the `env.yml` file: `conda env create -f env.yml`
- Install dependencies: `python -m pip install -r requirements.txt`
- Install the package: `python setup.py develop`
- Move into each subdirectory inside `third_parties` and execute `python setup.py develop --all` (see the sketch after this list)
- Set `base_dir` in `confs/config.yaml` to the absolute path of this project
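A minimal sketch of the `third_parties` step, assuming every subdirectory ships its own `setup.py`:

```bash
# Install each third-party package in development mode.
# Assumption: every subdirectory of third_parties/ contains a setup.py.
for pkg in third_parties/*/; do
    (cd "$pkg" && python setup.py develop --all)
done
```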
All the data are contained inside the `data` directory.
- Download the test scenes: http://dl.fbaipublicfiles.com/habitat/habitat-test-scenes.zip
- Unzip the archive inside the project directory (see the sketch after this list)
- Suggestion: you can keep the data in a separate folder and use soft links (`ln -s /path/to/dataset /path/to/project/data`)
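A sketch of the download-and-unzip step, assuming the archive unpacks under a top-level `data/` folder as the standard Habitat test scenes do:

```bash
# Download the Habitat test scenes and unzip them in the project root.
wget http://dl.fbaipublicfiles.com/habitat/habitat-test-scenes.zip
unzip habitat-test-scenes.zip
```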
`python scripts/run_exp.py` runs training or deployment of a policy. More information about the RL baselines and our RL policy is given below.

To replay an experiment, use the following:

`python scripts/visualize_exp.py replay.episode_id={ID episode} replay.exp_name={PATH TO EPISODE} replay.modalities="['rgb', 'depth', 'semantic']"`
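A hypothetical training invocation: the dotted-override syntax is taken from the replay command above, while the `--config-path`/`--config-name` flags assume the script is a standard Hydra entry point (not verified against the code):

```bash
# Hypothetical: select one of the provided configs and override a value.
python scripts/run_exp.py --config-path confs/habitat --config-name gibson_goal_exploration \
    visualize=True
```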
The following learned baselines are implemented:

- `neuralslam`: start from `confs/habitat/gibson_neuralslam.yaml`
- `seal-v0`: start from `confs/habitat/gibson_seal.yaml`
- `curiosity-v0`: start from `confs/habitat/gibson_semantic_curiosity.yaml`
The following classical baselines are implemented:

- `randomgoalsbaseline`
- `frontierbaseline-v1` (`frontierbaseline-v2`, `frontierbaseline-v3`)
- `bouncebaseline`
- `rotatebaseline`
- `randombaseline`

Start from `confs/habitat/gibson_goal_exploration.yaml`.
- `CHECKPOINT_FOLDER`: folder in which checkpoints are saved
- `TOTAL_NUM_STEPS`: max number of training steps
- under `ppo`:
  - `replanning_steps`: how often to run the policy
  - `num_global_steps`: how often to train the policy
  - `save_periodic`: how often to save a checkpoint
  - `load_checkpoint_path`: full path to a checkpoint to load at start
  - `load_checkpoint`: set True to load `load_checkpoint_path`
- `visualize`: if True, debug images are shown
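For example, a hypothetical run that overrides some of these values from the command line, assuming they are exposed through the same Hydra-style dotted syntax as the replay command:

```bash
# Key names come from the list above; their availability as
# command-line overrides is an assumption.
python scripts/run_exp.py \
    CHECKPOINT_FOLDER=/path/to/checkpoints \
    TOTAL_NUM_STEPS=1000000 \
    ppo.replanning_steps=5 \
    ppo.save_periodic=10000
```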
Environments:

- `SemanticDisagreement-v0`: reward: `sum(disagreement_t)`

Environments for the RL baselines are also provided:

- `SemanticCuriosity-v0` (Semantic Curiosity)
- `sealenv-v0` (SEAL)
- `ExpSlam-v0` (NeuralSLAM)
Policies:

- `goalexplorationbaseline-v0`: state: `disagreement_t`, `map_t`, agent pose
Checkpoints:

- Ours {ADD LINK}

Start from `confs/habitat/gibson_goal_exploration.yaml` and set:

- `replanning_steps`: how often to run the policy
- `load_checkpoint_path`: full path to a checkpoint to load at start
- `load_checkpoint`: set to True
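A hypothetical deployment run that loads the released checkpoint; the `ppo.` prefix and the checkpoint path are assumptions:

```bash
# Hypothetical: load a trained policy at start-up.
python scripts/run_exp.py \
    ppo.load_checkpoint=True \
    ppo.load_checkpoint_path=/path/to/checkpoint.ckpt
```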
| Scenes models | Extract path | Archive size |
| --- | --- | --- |
| Gibson | `data/scene_datasets/gibson/{scene}.glb` | 1.5 GB |
| MatterPort3D | `data/scene_datasets/mp3d/{scene}/{scene}.glb` | 15 GB |
You can download the task at the following link {ADD LINK}; unzip it and put it in `data/datasets/objectnav/gibson/v1.1`.
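A sketch of the placement step; the archive name is hypothetical (substitute whatever file the link provides):

```bash
# Hypothetical archive name; substitute the downloaded file.
mkdir -p data/datasets/objectnav/gibson/v1.1
unzip objectnav_gibson.zip -d data/datasets/objectnav/gibson/v1.1
```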
| Task | Scenes | Link | Extract path | Config to use | Archive size |
| --- | --- | --- | --- | --- | --- |
| Point goal navigation | Gibson | pointnav_gibson_v1.zip | `data/datasets/pointnav/gibson/v1/` | `datasets/pointnav/gibson.yaml` | 385 MB |
| Point goal navigation corresponding to Sim2LoCoBot experiment configuration | Gibson | pointnav_gibson_v2.zip | `data/datasets/pointnav/gibson/v2/` | `datasets/pointnav/gibson_v2.yaml` | 274 MB |
| Point goal navigation | MatterPort3D | pointnav_mp3d_v1.zip | `data/datasets/pointnav/mp3d/v1/` | `datasets/pointnav/mp3d.yaml` | 400 MB |
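For example, extracting the Gibson point-goal archive to the path listed above (assuming the zip was downloaded into the project root; adjust if the archive already contains the `v1/` prefix):

```bash
mkdir -p data/datasets/pointnav/gibson/v1
unzip pointnav_gibson_v1.zip -d data/datasets/pointnav/gibson/v1/
```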
- Follow the instructions in the main Habitat-lab repository
- Request the license from the Gibson website: https://stanfordvl.github.io/iGibson/dataset.html
- Download Gibson tiny with `wget https://storage.googleapis.com/gibson_scenes/gibson_tiny.tar.gz`
- Follow the instructions at Habitat-sim to generate the Gibson semantic data
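A sketch of unpacking the archive; the destination is an assumption based on the extract paths above, and you may need to rename the extracted folder to match `data/scene_datasets/gibson/`:

```bash
# Unpack the Gibson tiny scenes (destination is an assumption).
mkdir -p data/scene_datasets
tar -xzf gibson_tiny.tar.gz -C data/scene_datasets
```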
- detectron2 >= 0.5
- torch >= 1.9
- pytorch-lightning >= 1.5
- habitat-sim = 0.2
- habitat-lab
- torchmetrics >= 0.6
If you want to contribute to the project, we suggest installing `requirements-dev.txt` and enabling pre-commit:
python -m pip install -r requirements-dev.txt
pre-commit install
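Once installed, the hooks run automatically on each commit; you can also run them over the whole tree:

```bash
pre-commit run --all-files
```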