Bandit algorithm simulations and analysis for online learning
This repo is part of my effort to learn more about optimisation for online learning algorithms, which are heavily centred on bandit theory. Based on my understanding, there are different types of bandit problems:
- Multi-armed bandits: the arms are indistinguishable except for their underlying reward distributions. The objective is to identify the arm with the highest expected reward via online learning, which is a classic explore-versus-exploit problem (see the epsilon-greedy sketch after this list).
- Contextual bandits: bandits with features (aka context) that interact differently with different actions, so the best action depends on the observed context. This can be framed as a classification problem: given the input context, which action should be chosen to return a high reward?
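
To make the explore-versus-exploit idea concrete, here is a minimal epsilon-greedy sketch on simulated Bernoulli arms. The arm probabilities, epsilon and horizon are illustrative assumptions and are not tied to the code in this repo.

```python
import random

def epsilon_greedy_simulation(arm_probs, epsilon=0.1, horizon=1000, seed=0):
    """Minimal epsilon-greedy run on Bernoulli arms (illustrative only)."""
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms          # pulls per arm
    values = [0.0] * n_arms        # running mean reward per arm
    total_reward = 0.0

    for _ in range(horizon):
        if rng.random() < epsilon:                 # explore: pick a random arm
            arm = rng.randrange(n_arms)
        else:                                      # exploit: current best estimate
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
        total_reward += reward

    return counts, values, total_reward

# Example: three arms with success rates unknown to the agent
counts, values, total = epsilon_greedy_simulation([0.1, 0.5, 0.7])
print(counts, [round(v, 2) for v in values], total)
```

With a small epsilon the agent mostly pulls its current best estimate of the optimal arm but still samples the others occasionally, which is exactly the trade-off the MAB phase of this repo explores.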
This repo is segmented into both Python and R.
- Python:
- Phase 1 (MAB analysis): Implementation of selected multi-armed bandit algorithms for experimentation.
- Phase 2 (CB analysis): Implementation of contextual bandit algorithms, starting with LinUCB Disjoint and LinUCB Hybrid based on "A Contextual-Bandit Approach to Personalized News Article Recommendation".
- Phase 3 (CB analysis): Utilise the vowpal wabbit package for online learning in contextual bandit simulations.
- R:
- Phase 4 (MAB & CB analysis): Using the R library package contextual, which has a comprehensive ecosystem of different bandit algorithms and policies.
Phase 1 MAB analysis includes:
Phase 2 CB analysis (currently ongoing):
- LinUCB Disjoint Implementation and Analysis with a Dataset (a minimal sketch of the disjoint update follows this list)
- LinUCB Hybrid Implementation and Analysis with a MovieLens Dataset for Recommender Systems
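
For reference, here is a minimal sketch of the LinUCB Disjoint policy described in the Li et al. paper: each arm keeps its own ridge-regression statistics (A, b) and the policy picks the arm with the highest upper confidence bound theta^T x + alpha * sqrt(x^T A^{-1} x). The dimension, alpha and arm count below are arbitrary illustrative choices, not the values used in this repo's notebooks.

```python
import numpy as np

class LinUCBDisjointArm:
    """Per-arm state for the LinUCB Disjoint policy (sketch)."""
    def __init__(self, d, alpha):
        self.alpha = alpha
        self.A = np.identity(d)      # ridge-regression design matrix, d x d
        self.b = np.zeros((d, 1))    # reward-weighted feature sum, d x 1

    def ucb(self, x):
        """Upper confidence bound score for context x (column vector, d x 1)."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b       # ridge estimate of this arm's coefficients
        return (theta.T @ x + self.alpha * np.sqrt(x.T @ A_inv @ x)).item()

    def update(self, x, reward):
        """Update the sufficient statistics after observing a reward."""
        self.A += x @ x.T
        self.b += reward * x

def choose_arm(arms, x):
    """Pick the arm with the highest UCB for context x (ties broken by index)."""
    return int(np.argmax([arm.ucb(x) for arm in arms]))

# Illustrative usage: 3 arms, 5-dimensional context features
d, alpha = 5, 1.0
arms = [LinUCBDisjointArm(d, alpha) for _ in range(3)]
x = np.random.rand(d, 1)
a = choose_arm(arms, x)
arms[a].update(x, reward=1.0)        # e.g. a click was observed
```

Because each arm's statistics are independent, the "disjoint" variant is the simpler of the two; the Hybrid variant adds shared features across arms on top of this structure.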
A portion of the MAB code is based on the book "Bandit Algorithms for Website Optimization" by John Myles White.
Microsoft's vowpal wabbit package for Python can be found in this GitHub repo.
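
As a rough sketch of how the Phase 3 simulations could drive vowpal wabbit from Python, the snippet below uses the pyvw interface with the --cb_explore flag and VW's action:cost:probability text format. The import path and constructor name differ across vowpalwabbit versions (newer releases expose a Workspace object instead), and the feature names and action count are made-up examples, so treat this as an assumption about the API rather than a drop-in recipe.

```python
# Sketch only: API names assume an older pyvw-style interface.
from vowpalwabbit import pyvw

# 4 possible actions, epsilon-greedy exploration over them
vw = pyvw.vw("--cb_explore 4 --epsilon 0.2 --quiet")

# VW contextual bandit label format: "action:cost:probability | features".
# Here action 2 was taken with probability 0.25 and incurred cost 0.5.
vw.learn("2:0.5:0.25 | user_age_25 device_mobile hour_20")

# Prediction returns a probability distribution over the 4 actions.
pmf = vw.predict("| user_age_25 device_mobile hour_20")
print(pmf)
```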
The R package contextual can be found in this GitHub repo.