In this assignment, we will implement a multi-task movie recommender system based on the classic Matrix Factorization and Neural Collaborative Filtering algorithms. In particular, we will build a model based on the BellKor solution to the Netflix Grand Prize challenge and extend it to predict both likely user-movie interactions and potential scores.
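The core of the BellKor-style predictor is a biased matrix factorization: a rating is modeled as a global mean plus user and item biases plus a dot product of latent factors, trained by SGD. A minimal sketch on hypothetical toy data (the ratings, dimensions, and hyperparameters below are illustrative, not the assignment's):

```python
import numpy as np

# Hypothetical toy setup: 4 users, 5 movies, a few observed (user, item, rating) triples.
rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3
ratings = [(0, 1, 4.0), (0, 3, 2.0), (1, 1, 5.0), (2, 4, 3.0), (3, 0, 1.0)]

mu = np.mean([r for _, _, r in ratings])     # global mean rating
b_u = np.zeros(n_users)                      # per-user bias
b_i = np.zeros(n_items)                      # per-item bias
P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors

lr, reg = 0.05, 0.02
for epoch in range(200):
    for u, i, r in ratings:
        pred = mu + b_u[u] + b_i[i] + P[u] @ Q[i]
        e = r - pred
        # SGD updates with L2 regularization on biases and factors
        b_u[u] += lr * (e - reg * b_u[u])
        b_i[i] += lr * (e - reg * b_i[i])
        P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                      Q[i] + lr * (e * P[u] - reg * Q[i]))

rmse = np.sqrt(np.mean([(r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])) ** 2
                        for u, i, r in ratings]))
print(f"train RMSE: {rmse:.3f}")
```

The neural collaborative filtering variant replaces the dot product `P[u] @ Q[i]` with an MLP over the concatenated user and item embeddings; the bias structure can be kept either way.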
In this assignment, we will look at meta-learning for few-shot classification:
- Learn how to process and partition data for meta-learning problems, where training is done over a distribution of training tasks.
- Implement and train memory-augmented neural networks (MANNs), a black-box meta-learning approach that uses a recurrent neural network.
- Analyze learning performance on problems of different sizes.
- Experiment with model parameters and explore how they affect performance.
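A key data-processing step above is sampling N-way, K-shot episodes: each training "example" is itself a small classification task with a support set and a query set. A minimal sketch, assuming a dataset organized as a mapping from class label to examples (all names here are illustrative, not from the starter code):

```python
import random

def sample_episode(data, n_way=3, k_shot=2, q_query=2, seed=0):
    """Sample one few-shot episode: N classes, K support + Q query examples each.

    `data` maps class label -> list of examples; the structure is assumed.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(data), n_way)
    support, query = [], []
    for new_label, cls in enumerate(classes):
        # Relabel classes 0..N-1 within the episode so labels carry no
        # information across tasks -- the model must adapt from the support set.
        examples = rng.sample(data[cls], k_shot + q_query)
        support += [(x, new_label) for x in examples[:k_shot]]
        query += [(x, new_label) for x in examples[k_shot:]]
    rng.shuffle(query)  # labels must not be inferable from presentation order
    return support, query

# Hypothetical dataset: 5 classes with 10 examples each.
data = {c: [f"{c}_{i}" for i in range(10)] for c in "ABCDE"}
support, query = sample_episode(data)
print(len(support), len(query))  # 6 6
```

For a MANN, the support pairs are fed to the recurrent model in sequence (with labels offset by one step) before the unlabeled queries.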
In this assignment, we will experiment with two meta-learning algorithms, prototypical networks (protonets) and model-agnostic meta-learning (MAML), for few-shot image classification on the Omniglot dataset:
- Implement both algorithms (given starter code).
- Interpret key metrics of both algorithms.
- Investigate the effect of task composition during protonet training on evaluation performance.
- Investigate the effect of different inner loop adaptation settings in MAML.
- Investigate the performance of both algorithms on meta-test tasks that have more support data than training tasks do.
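The prototypical-network side of the assignment reduces at inference time to a simple rule: average the embedded support examples of each class into a prototype, then classify each query by its nearest prototype in squared Euclidean distance. A minimal NumPy sketch with hand-made 2-D "embeddings" (the function name and toy data are illustrative):

```python
import numpy as np

def proto_classify(support_emb, support_labels, query_emb, n_way):
    """Nearest-prototype classification (the protonet inference step).

    support_emb: (N*K, d) support embeddings; query_emb: (Q, d) query embeddings.
    """
    # Class prototype = mean of that class's support embeddings.
    prototypes = np.stack([
        support_emb[support_labels == c].mean(axis=0) for c in range(n_way)
    ])                                                  # (n_way, d)
    # Squared Euclidean distance from each query to each prototype.
    d2 = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return (-d2).argmax(axis=1)                         # logits are -distance

# Toy check: two clusters standing in for two classes' embeddings.
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.2, 0.1], [4.9, 5.2]])
print(proto_classify(support, labels, queries, n_way=2))  # [0 1]
```

The last bullet above corresponds to growing K at meta-test time: the prototype simply averages over more support embeddings, whereas MAML's inner loop sees more adaptation data.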
This assignment will explore several methods for performing few-shot (and zero-shot) learning with pre-trained language models (LMs), including variants of fine-tuning and in-context learning. The goals are to gain familiarity with few-shot learning using pre-trained LMs, understand the relative strengths and weaknesses of fine-tuning and in-context learning, and explore some recent methods proposed for improving on the basic forms of these algorithms.
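In its basic form, in-context learning needs no gradient updates: the k labeled demonstrations are concatenated with the query into a single prompt, and the LM's next-token prediction serves as the classification. A minimal sketch of prompt construction (the "Review:/Sentiment:" template and all data below are illustrative assumptions, not the assignment's format):

```python
def build_icl_prompt(demos, query, label_map):
    """Format k-shot demonstrations plus a query into a single LM prompt.

    demos: list of (text, label) pairs; label_map: label -> verbalizer string.
    The template here is a hypothetical example, not a prescribed format.
    """
    lines = []
    for text, label in demos:
        lines.append(f"Review: {text}\nSentiment: {label_map[label]}")
    # The query repeats the pattern but leaves the answer slot empty,
    # so the LM completes it with a label word.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("Great movie!", 1), ("Terrible plot.", 0)]
prompt = build_icl_prompt(demos, "I loved it.", {0: "negative", 1: "positive"})
print(prompt)
```

Fine-tuning, by contrast, updates (some of) the model's weights on the same k examples; the trade-off between the two is a central theme of the assignment.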