
Learning from Sparse Demonstrations

This project is the implementation of the paper Learning from Sparse Demonstrations by Wanxin Jin, Todd D. Murphey, Dana Kulić, Neta Ezer, and Shaoshuai Mou. Please find more details in the paper (arXiv:2008.02159).

This repo has been tested with:

  • Ubuntu 20.04.2 LTS, Python 3.8.5, CasADi 3.5.5, Numpy 1.20.1, Scipy 1.6.1, coinor-libipopt-dev 3.11.9-2.2build2.

Project Structure

The current version of the project consists of five folders:

  • CPDP : a package including an optimal control solver, functionality for differentiating the maximum principle, and functionality for solving the differential maximum principle.
  • JinEnv : an independent package providing various robot environments for simulation.
  • Examples : various examples that reproduce the experiments in the paper.
  • lib : helper libraries for obtaining human demonstrations via a GUI.
  • test : test files for the GUI.

Dependency Packages

Please make sure that the following packages have been installed before using the CPDP package or the JinEnv package.

$ sudo apt update
$ sudo apt install coinor-libipopt-dev ffmpeg libxcb-xinerama0
$ pip3 install casadi numpy transforms3d scipy matplotlib pyqt5
$ git clone https://github.com/zehuilu/Learning-from-Sparse-Demonstrations
$ cd <ROOT_DIRECTORY>
$ mkdir trajectories data
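
To verify that the Python dependencies installed correctly, a quick sanity check (illustrative, not part of the repo) is:

$ python3 -c "import casadi, numpy, scipy, transforms3d, matplotlib, PyQt5; print(casadi.__version__)"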

How to Train Your Robots

Below is the procedure for applying the code to train your robot to learn from sparse demonstrations; a toy sketch follows the list.

  • Step 1. Load a robot environment from the JinEnv library (specify the parameters of the robot dynamics).
  • Step 2. Specify a parametric time-warping function and a parametric cost function (loaded from JinEnv).
  • Step 3. Provide some sparse demonstrations and define the trajectory loss function.
  • Step 4. Set the learning rate and start training your robot (apply CPDP) from initial guesses.
  • Step 5. Done. Check and simulate your robot visually (use the animation utilities from JinEnv).
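
For orientation, below is a toy sketch of Steps 2-4 in plain NumPy. All names (poly_time_warp, traj, traj_loss) are hypothetical stand-ins, not the CPDP/JinEnv API: the real pipeline obtains the trajectory by solving an optimal control problem and computes analytic gradients via the differential maximum principle, whereas this sketch uses a closed-form curve and finite differences.

# Toy sketch of Steps 2-4; all names are hypothetical stand-ins,
# not the CPDP/JinEnv API.
import numpy as np

# Sparse demonstrations (Step 3): waypoints y_i* with rough time stamps tau_i.
taus = np.array([0.0, 0.5, 1.0])
waypoints = np.array([[0.0, 0.0], [1.0, 0.8], [2.0, 0.0]])

def poly_time_warp(tau, beta):
    # Parametric time-warping t(tau) = beta0*tau + beta1*tau^2 (Step 2).
    return beta[0] * tau + beta[1] * tau ** 2

def traj(t, theta):
    # Stand-in for the optimal-control trajectory x(t; theta); CPDP would
    # obtain this by solving an OC problem with the parametric cost.
    return np.stack([theta[0] * t, theta[1] * np.sin(np.pi * t)], axis=-1)

def traj_loss(theta, beta):
    # Trajectory loss: squared distance to the waypoints at the warped times.
    t = poly_time_warp(taus, beta)
    return np.sum((traj(t, theta) - waypoints) ** 2)

# Step 4: gradient descent; finite differences stand in for CPDP's analytic
# gradient from the differential maximum principle.
theta, beta, lr, eps = np.array([2.0, 1.0]), np.array([1.0, 0.0]), 1e-2, 1e-6
for _ in range(200):
    grad = np.array([(traj_loss(theta + eps * e, beta) - traj_loss(theta, beta)) / eps
                     for e in np.eye(2)])
    theta -= lr * grad
print("final loss:", traj_loss(theta, beta))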

The quickest way to get hands-on with the code is to check and run the examples under the folder ./Examples/.

The quadrotor demo exposes a few parameters, including the 3D space limits and the average speed used to estimate the waypoint times; a sketch of this estimate is shown below.
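
As a hedged illustration (the demo's actual variable names may differ), each waypoint's time can be estimated as the cumulative path length up to that waypoint divided by the average speed:

# Illustrative only: estimate a time stamp for each waypoint from an assumed
# average speed; names (space_limits, avg_speed) are not the repo's.
import numpy as np

space_limits = np.array([[-5.0, 5.0], [-5.0, 5.0], [0.0, 10.0]])  # 3D bounds
avg_speed = 1.5  # assumed average speed, m/s

def estimate_times(positions, speed):
    # Time at waypoint i = cumulative path length up to i / average speed.
    seg_len = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return np.concatenate(([0.0], np.cumsum(seg_len) / speed))

wp = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 2.0], [2.0, 0.0, 3.0]])
wp = np.clip(wp, space_limits[:, 0], space_limits[:, 1])  # keep inside bounds
print(estimate_times(wp, avg_speed))  # -> [0.0, 1.155, 2.309]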

Run the algorithm with pre-defined waypoints:

$ cd <ROOT_DIRECTORY>
$ python3 Examples/quad_example.py

Run the algorithm with human inputs:

$ cd <ROOT_DIRECTORY>
$ python3 Examples/quad_example_human_input.py

To obtain human input via matplotlib ginput():

$ cd <ROOT_DIRECTORY>
$ python3 test/test_input.py
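
For reference, here is a minimal stand-alone ginput() example (illustrative only; the repo's test/test_input.py may differ):

# Illustrative stand-alone example; not the repo's test/test_input.py.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.set_title("Click waypoints; press Enter to finish")
# n=-1: unlimited clicks, ended by Enter or middle-click; timeout=0: no limit.
points = plt.ginput(n=-1, timeout=0)
plt.close(fig)
print("clicked waypoints:", points)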

To obtain human input via a GUI with PyQt5:

$ cd <ROOT_DIRECTORY>
$ python3 test/test_gui.py
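
Below is a minimal PyQt5 click-capture sketch (illustrative only; the repo's test/test_gui.py may differ):

# Illustrative sketch only; not the repo's test/test_gui.py.
import sys
from PyQt5.QtWidgets import QApplication, QWidget

class ClickPad(QWidget):
    """A blank window that records the pixel position of every click."""
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Click to add waypoints")
        self.resize(400, 400)
        self.points = []

    def mousePressEvent(self, event):
        self.points.append((event.x(), event.y()))
        print("waypoint:", self.points[-1])

app = QApplication(sys.argv)
pad = ClickPad()
pad.show()
sys.exit(app.exec_())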

Contact Information and Citation

If you encounter a bug when running the code, please feel free to let me know via email.

The code is updated regularly.

If you find this project helpful in your publications, please consider citing our paper.

@article{jin2020learning,
  title={Learning from Sparse Demonstrations},
  author={Jin, Wanxin and Murphey, Todd D and Kuli{\'c}, Dana and Ezer, Neta and Mou, Shaoshuai},
  journal={arXiv preprint arXiv:2008.02159},
  year={2020}
}