Inverse Reinforcement Learning algorithm implementations in Python.
My seminar paper can be found in the `paper` directory; it is based on IRLwPython version 0.0.1.
Maximum Entropy IRL (MEIRL): an implementation of the maximum entropy inverse reinforcement learning algorithm from [1], based on the implementation of lets-do-irl. It is an IRL algorithm using Q-learning with a maximum entropy update function for the IRL reward estimation. The next action is selected based on the maximum of the Q-values.
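As a minimal sketch of that update (with hypothetical names, not the repository's API), the per-state reward weights are moved toward the expert's state-visitation frequencies and away from the learner's:

```python
import numpy as np

# Sketch of the maximum entropy IRL reward update after Ziebart et al. [1].
# All names here are illustrative, not the repository's API.

def maxent_irl_update(theta, expert_svf, learner_svf, lr=0.05):
    """One gradient step on the per-state reward weights.

    theta:        estimated IRL reward per discretized state, shape (n_states,)
    expert_svf:   state-visitation frequencies of the expert demonstrations
    learner_svf:  state-visitation frequencies of the current Q-learning policy
    """
    # Maximum entropy gradient: expert visitations minus learner visitations.
    return theta + lr * (expert_svf - learner_svf)

# During rollout the next action is then picked greedily from the Q-table:
# action = np.argmax(q_table[state])
```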
Maximum Entropy Deep IRL (MEDIRL): an implementation of the maximum entropy inverse reinforcement learning algorithm that uses a neural network for the actor. The estimated IRL reward is learned similarly to MEIRL. It is an IRL algorithm using deep Q-learning with a maximum entropy update function. The next action is selected epsilon-greedily over the Q-values.
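A minimal sketch of that action selection, assuming a small PyTorch Q-network; the layer sizes and names are assumptions, not the repository's code:

```python
import random
import torch
import torch.nn as nn

# Illustrative actor: a small Q-network for MountainCar's 2-dimensional
# observation and 3 discrete actions. Sizes are assumptions.
q_network = nn.Sequential(
    nn.Linear(2, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)

def select_action(state, epsilon, n_actions=3):
    # Epsilon-greedy: explore with probability epsilon, otherwise take the
    # action with the maximum Q-value.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        q_values = q_network(torch.as_tensor(state, dtype=torch.float32))
    return int(torch.argmax(q_values).item())
```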
Maximum Entropy Deep RL (MEDRL): an RL implementation of the MEDIRL algorithm. It gets the real rewards directly from the environment instead of estimating IRL rewards. The neural network architecture and action selection are the same as in MEDIRL.
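Sketched as a single helper (hypothetical names), the only difference from MEDIRL is which reward feeds the Q-update:

```python
def training_reward(env_reward, irl_reward, state_idx, use_real_reward):
    """Reward used for the Q-update (illustrative, not the repository's API).

    MEDRL (RL) uses the real environment reward directly, while MEDIRL (IRL)
    substitutes the estimated IRL reward of the visited state.
    """
    return env_reward if use_real_reward else irl_reward[state_idx]
```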
The MountainCar-v0 environment is used to evaluate the different algorithms. For this, the MDP implementation of MountainCar from gym is used.
The expert demonstrations for MountainCar-v0 are the same as those used in lets-do-irl.
Heatmap of the expert demonstrations with 400 states:
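The 400 states come from discretizing MountainCar's continuous (position, velocity) observation. A sketch of such a discretization, assuming a 20 x 20 grid (an assumption consistent with the 400 states above):

```python
import gym
import numpy as np

env = gym.make("MountainCar-v0")
n_bins = 20  # 20 x 20 grid -> 400 discrete states (assumed layout)
low, high = env.observation_space.low, env.observation_space.high

def discretize(observation):
    # Map the continuous (position, velocity) observation to one of the
    # 400 state indices.
    ratios = (observation - low) / (high - low)
    bins = np.clip((ratios * n_bins).astype(int), 0, n_bins - 1)
    return int(bins[0] * n_bins + bins[1])
```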
The following tables compare the results of training and testing the two IRL algorithms, Maximum Entropy IRL and Maximum Entropy Deep IRL. In addition, results for the RL algorithm Maximum Entropy Deep RL are shown to highlight the differences between IRL and RL.
| Algorithm | Training Curve after 1000 Episodes | Training Curve after 5000 Episodes |
|---|---|---|
| Maximum Entropy IRL | | |
| Maximum Entropy Deep IRL | | |
| Maximum Entropy Deep RL | | |
| Algorithm | Testing Results: 100 Runs |
|---|---|
| Maximum Entropy IRL | |
| Maximum Entropy Deep IRL | |
| Maximum Entropy Deep RL | |
The implementations of MaxEntropyIRL and MountainCar are based on the implementation of: lets-do-irl
[1] B. D. Ziebart et al., "Maximum Entropy Inverse Reinforcement Learning," AAAI 2008.
Install the package:

```
cd IRLwPython
pip install .
```
```
usage: irl-runner [-h] [--version] [--training] [--testing] [--render] ALGORITHM

Implementation of IRL algorithms

positional arguments:
  ALGORITHM   Currently supported training algorithm: [max-entropy, max-entropy-deep, max-entropy-deep-rl]

options:
  -h, --help  show this help message and exit
  --version   show program's version number and exit
  --training  Enables training of model.
  --testing   Enables testing of previously created model.
  --render    Enables visualization of mountaincar.
```
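For example, to train the deep IRL agent with rendering enabled:

```
irl-runner --training --render max-entropy-deep
```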