Papers • Introduction • Tutorials • Survey • Problem Settings • Theory
Dissertations • Code & Library • Scholars • Applications
Contributing
If you find any valuable research, please feel free to open a pull request or contact ruihe.cs@gmail.com to update this repository. Comments and suggestions are also very welcome!
By conference: ICML / NeurIPS / ICLR / AAAI / IJCAI / ACL / CVPR / ICCV
By journal: AI / TPAMI / IJCV / JMLR
By degree: Master / PhD
Constructed in a problem-oriented way, which makes it easy for users to locate and track the problems of interest.
- Taxonomy of Strategies
- AL Problem Settings
- AL in other AI Fields
- Deep AL
- Practical Considerations
- AL Applications (Scientific & Industrial)
Problem: High labeling cost is common in the machine learning community. Acquiring a large number of annotations hinders the application of machine learning methods.
Essence / Assumption: Not all instances are equally important to the desired task, so labeling only the more important instances can reduce the annotation cost.
When we talk about active learning, we talk about:
- an approach to reduce the annotation cost in machine learning.
- the ways to select the most important instances for the corresponding tasks.
- (in most cases) an interactive labeling manner between algorithms and oracles.
- a machine learning setting where human experts could be involved.
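The interactive manner described above can be sketched as a generic loop: train on the current labeled set, let the strategy pick an instance, ask the oracle for its label, and retrain. A minimal sketch, where `train`, `select`, and `oracle` are hypothetical placeholders supplied by the user:

```python
def active_learning_loop(labeled, unlabeled, oracle, train, select, budget):
    """Generic interactive AL loop between an algorithm and an oracle.

    labeled   -- list of (instance, label) pairs collected so far
    unlabeled -- pool of instances without labels
    oracle    -- callable returning the label of an instance (e.g. a human expert)
    train     -- callable fitting a model on the labeled pairs
    select    -- callable choosing the next instance to query
    budget    -- number of labels we are willing to pay for
    """
    model = train(labeled)
    for _ in range(budget):
        x = select(model, unlabeled)      # pick the most informative instance
        unlabeled.remove(x)
        labeled.append((x, oracle(x)))    # the oracle provides the label
        model = train(labeled)            # retrain on the enlarged labeled set
    return model

# Toy run: the "model" is just the mean label, the strategy is trivial.
train = lambda pairs: sum(y for _, y in pairs) / len(pairs)
select = lambda model, pool: pool[0]
oracle = lambda x: x % 2
model = active_learning_loop([(0, 0)], [1, 2, 3], oracle, train, select, budget=2)
```

The whole field of query strategies is about replacing the trivial `select` above with something smarter.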
There have been several reviews / surveys / benchmarks for this topic.
- Active learning: theory and applications [2001]
- Active Learning Literature Survey (Recommend to read) [2009]
- A survey on instance selection for active learning [2012]
- Active Learning: A Survey [2014]
- Active Learning Query Strategies for Classification, Regression, and Clustering: A Survey [2020, Journal of Computer Science and Technology]
- A Survey of Active Learning for Text Classification using Deep Neural Networks [2020]
- A Survey of Deep Active Learning [2020]
- Active Learning: Problem Settings and Recent Developments [2020]
- From Model-driven to Data-driven: A Survey on Active Deep Learning [2021]
- Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning [2021]: HIL, a wider framework.
- A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification [2021]
- A Comparative Survey of Deep Active Learning [2022]
- A survey on online active learning [2023]
- Human-in-the-loop machine learning: a state of the art [2023, Artificial Intelligence Review]
- A Comparative Survey: Benchmarking for Pool-based Active Learning [2021, IJCAI]
- A Framework and Benchmark for Deep Batch Active Learning for Regression [2022]
- Re-Benchmarking Pool-Based Active Learning for Binary Classification [2023]
- LabelBench: A Comprehensive Framework for Benchmarking Label-Efficient Learning [2023]
Lecture Topic | Year | Lecturer | Occasion |
---|---|---|---|
Active learning and transfer learning at scale with R and Python | 2018 | - | KDD |
Active Learning from Theory to Practice | 2019 | Robert Nowak & Steve Hanneke | ICML |
Overview of Active Learning for Deep Learning | 2021 | Jacob Gildenblat | Personal Blog |
Almost all AL studies are based on the following scenarios. The difference lies in the source of the queried samples. The details of these scenarios can be found here.
Three scenarios and corresponding tasks:
- pool-based: select from a pre-collected data pool
- stream-based: select from a stream of incoming data
- query synthesis: generate queries instead of selecting data
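For the pool-based scenario in particular, the classical query strategy is uncertainty sampling: score every instance in the pool by how uncertain the current model's prediction is, and query the highest-scoring one. A minimal sketch using prediction entropy (the `predict_proba` toy classifier below is a made-up stand-in for a real model):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def query_most_uncertain(predict_proba, pool):
    """Pool-based AL: pick the unlabeled instance whose prediction
    is most uncertain (maximum entropy)."""
    return max(pool, key=lambda x: entropy(predict_proba(x)))

# Toy binary classifier: confident near 0 and 1, uncertain near 0.5.
def predict_proba(x):
    return [x, 1.0 - x]

pool = [0.1, 0.45, 0.9]
print(query_most_uncertain(predict_proba, pool))  # 0.45, closest to the boundary
```

Stream-based selection uses the same score but decides instance-by-instance (query if the score exceeds a threshold), and query synthesis searches for an `x` that maximizes it.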
There are many variants of machine learning problem settings with more advanced tasks. Under these problem settings, AL can be further applied.
Related AL Fields:
- Multi-class AL
- Multi-label AL
- Multi-task AL
- Multi-domain AL
- Multi-view/modal AL
- Multi-instance AL
AL is also used to reduce the cost of annotation in many other AI research fields, where the tasks go beyond simple classification or regression. They either require different types of outputs or assume an unusual learning process, so AL algorithms need to be revised or developed for these problem settings.
AL is utilized in the following fields (hot topics):
- Computer Vision (CV)
- Natural Language Processing (NLP)
- Domain adaptation / Transfer learning
- One / Few / Zero-shot learning or Meta-Learning
- Graph Processing
- Metric learning / Pairwise comparison/Similarity learning
- Recommendation
- Reinforcement Learning
- Robotics
- Model Interpretability
- Clustering
- (The full list of fields can be found here.)
There has been plenty of theoretical support for AL. Most of it focuses on finding a performance guarantee or the weaknesses of AL selection.
(This section has not been finished yet.)
Much AL research is built on highly idealized experimental settings. When AL is used in real-life scenarios, the practical situation usually does not perfectly match the assumptions made in the experiments. These mismatched assumptions lead to issues that hinder the application of AL. In this section, the practical considerations are reviewed under different assumptions.
The considerations of: data / oracle / scale / workflow / model training cost / query & feedback types / performance metric / reliability / privacy / others
The details and the full list can be found here.
AL has already been used in many real-world applications. In many companies the implementations are confidential, but we can still find many applications in published papers and on websites.
Basically, there are two types of applications: scientific applications and industrial applications.
Name | Languages | Author | Notes |
---|---|---|---|
AL playground | Python(scikit-learn, keras) | - | Abandoned |
modAL | Python(scikit-learn) | Tivadar Danka | Keep updating |
libact | Python(scikit-learn) | NTU(Hsuan-Tien Lin group) | |
ALiPy | Python(scikit-learn) | NUAA(Shengjun Huang) | Include MLAL |
pytorch_active_learning | Python(pytorch) | Robert Monarch | Keep updating & include active transfer learning |
DeepAL | Python(scikit-learn, pytorch) | Kuan-Hao Huang | Keep updating & deep neural networks |
BaaL | Python(scikit-learn, pytorch) | ElementAI | Keep updating & bayesian active learning |
lrtc | Python(scikit-learn, tensorflow) | IBM | Text classification |
Small-text | Python(scikit-learn, pytorch) | Christopher Schröder | Text classification |
DeepCore | Python(scikit-learn, pytorch) | Guo et al. | In the coreset selection formulation |
PyRelationAL: A Library for Active Learning Research and Development | Python(scikit-learn, pytorch) | Scherer et al. | |
DeepAL+ | Python(scikit-learn, pytorch) | Zhan | An extension for DeepAL |
ALaaS | Python(scikit-learn) | A*STAR & NTU | Uses stage-level parallelism for AL.
We also list several scholars who are currently contributing heavily to this research direction.
- Hsuan-Tien Lin
- Shengjun Huang (NUAA)
- Dongrui Wu (Active Learning for Regression)
- Raymond Mooney
- Yuchen Guo
- Steve Hanneke
Several young researchers who provide valuable insights into AL:
- Jamshid Sourati [University of Chicago]: Deep neural networks.
- Stefano Teso [University of Trento]: Interactive learning & human-in-the-loop.
- Xueyin Zhan [City University of Hong Kong]: Provides several invaluable comparative surveys.
- Katerina Margatina [University of Sheffield]: Provides several good insights, analyses, and applications for AL.