Continual Reinforcement Learning in 3D Non-stationary Environments
A pip package implementing reinforcement learning algorithms for non-stationary environments, built on the OpenAI Gym toolkit.
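The package's own API is not documented here; as a minimal sketch, the loop below only shows the standard Gym interaction pattern such a package plugs into, assuming the classic pre-0.26 `step()` signature and using a stock environment id as a stand-in for the package's non-stationary ones.

```python
import gym

env = gym.make("CartPole-v0")  # substitute one of the package's non-stationary environment ids
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # random placeholder policy
    obs, reward, done, info = env.step(action)  # classic 4-tuple Gym API (pre-0.26)
    total_reward += reward
env.close()
```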
Code associated with the NeurIPS 2019 paper "Weighted Linear Bandits in Non-Stationary Environments"
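The core idea behind weighted linear bandits is a discounted (exponentially weighted) least-squares estimate of the reward parameter, so older observations fade and the estimate can track drift. The sketch below shows only that estimator; the confidence bonus, the parameter values and the names used here are illustrative assumptions, not the authors' code.

```python
import numpy as np

class DiscountedLinearEstimator:
    def __init__(self, dim, gamma=0.99, lam=1.0):
        self.gamma = gamma           # discount applied to past observations
        self.lam = lam               # ridge regularisation strength
        self.V = lam * np.eye(dim)   # weighted Gram matrix: sum_s gamma^(t-s) x_s x_s^T + lam*I
        self.b = np.zeros(dim)       # weighted responses:   sum_s gamma^(t-s) r_s x_s

    def update(self, x, r):
        # Discount only the data part of V; the (1-gamma)*lam*I term keeps lam*I fixed.
        self.V = self.gamma * self.V + np.outer(x, x) + (1.0 - self.gamma) * self.lam * np.eye(len(x))
        self.b = self.gamma * self.b + r * x

    def theta(self):
        # Current estimate of the (possibly drifting) reward parameter.
        return np.linalg.solve(self.V, self.b)
```

With gamma = 1 this reduces to ordinary ridge regression (the stationary LinUCB estimate); smaller gamma forgets the past faster.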
Experiments for the paper "Online Learning with Costly Features in Non-stationary Environments"
Queue-Based Resampling (QBR, ICANN 2018)
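As a rough reading of the QBR idea: keep small FIFO queues of recent examples per class and replay them together with each incoming example, so the minority class keeps contributing to incremental updates on an imbalanced, drifting stream. The queue size, class labels and scikit-learn learner below are illustrative assumptions, not the paper's exact procedure.

```python
from collections import deque
import numpy as np
from sklearn.linear_model import SGDClassifier

QUEUE_SIZE = 100
classes = np.array([0, 1])
queues = {c: deque(maxlen=QUEUE_SIZE) for c in classes}  # one FIFO queue per class
model = SGDClassifier()

def process(x, y):
    """Consume one labelled example (x, y) from the stream."""
    queues[y].append(x)
    # Replay the queued examples of every class alongside the new one.
    X = np.vstack([np.vstack(q) for q in queues.values() if q])
    Y = np.concatenate([[c] * len(q) for c, q in queues.items() if q])
    model.partial_fit(X, Y, classes=classes)
```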
Code for the manuscript 'Hierarchy of prediction errors shapes the learning of context-dependent sensory representations'
R package implementing transformed-stationary extreme value analysis
Implementation of the Diversity Pool algorithm proposed in the paper "Diversity-Based Pool of Models for Dealing with Recurring Concepts", presented at IJCNN 2018
Repository for a project on non-stationary contextual bandits from the course CSC2558: "Intelligent Adaptive Interventions"
MATLAB laboratories for the course "Online Learning and Monitoring" at Politecnico di Milano
Mitigating the Stability-Plasticity Dilemma in Adaptive Train Scheduling with Curriculum-Driven Continual DQN Expansion
Work for CDC 2020