The reading group is open to anyone who is interested in machine learning and who wants to meet up regularly to discuss ML research papers. If you want to get added to the mailing list or have any other questions feel free to contact Joel Oskarsson.
For HT 2024 we will continue with the format where we stick to the same topic for two consecutive sessions. The first session (September 4) will be devoted to deciding all the topics for the fall.
- One or two people are designated as hosts for each topic (two sessions). They are responsible for choosing the papers to be discussed and for leading the discussion during the sessions.
- We meet Wednesdays 10:15-12:00 on odd weeks.
- We will meet in Thomas Bayes, B-building.
- For each session, choose one paper related to your topic that you think would be interesting to discuss in the reading group. To set a focus for the reading group, we came up with the following short guidelines for choosing papers:
  - The main topic of the paper should be core machine learning research. Try to avoid papers that just apply well-known machine learning methods to specific application areas.
  - Make sure the paper is of high quality. Read through it yourself and try to gauge its quality. As a guideline, it should be of a quality publishable at a top machine learning conference (the paper should be of that quality; it does not actually have to be a conference paper).
  - If you want a second opinion on whether a paper is suitable, feel free to ask anyone who has been in the reading group in previous years.
  - Think about how the two papers in your topic relate to each other. For example, it can be nice to first discuss an introductory paper and then the state of the art, or two different approaches to (or perspectives on) the same underlying problem.
- Send out a link to the paper on the mailing list at least one week in advance.
- As the host, it is also good to lead the discussion during the session to some extent. If you want, you can give a short description of why you chose the paper, but there is no need for a proper presentation. It might be a good idea to come to the session prepared with a few discussion points, just to keep the conversation going.
Week 36 (Sep 4) (NOTE: Even week)
Decide on topics for the fall.
Week 37 (Sep 11)
Topic: Flow matching
- Hosts: Filip, Dong
Flow Matching for Generative Modeling
Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matthew Le
https://openreview.net/forum?id=PqvMRDCJT9t
Our rating: 3.60 ± 0.49
Week 39 (Sep 25) (NOTE: Different time, 13:15-15:00)
Topic: Machine Learning for Climate Science
- Hosts: Joel, Martin
DYffusion: A Dynamics-informed Diffusion Model for Spatiotemporal Forecasting
Salva Rühling Cachay, Bo Zhao, Hailey Joren, Rose Yu
https://proceedings.neurips.cc/paper_files/paper/2023/hash/8df90a1440ce782d1f5607b7a38f2531-Abstract-Conference.html
(Optional reading)
- Probabilistic Emulation of a Global Climate Model with Spherical DYffusion
Salva Rühling Cachay, Brian Henn, Oliver Watt-Meyer, Christopher S. Bretherton, Rose Yu
https://arxiv.org/abs/2406.14798
Our rating: 3.22 ± 0.63
Week 43 (Oct 23)
Topic: Machine Learning for Climate Science
- Hosts: Joel, Martin
Neural General Circulation Models for Weather and Climate
Dmitrii Kochkov, Janni Yuval, Ian Langmore, Peter Norgaard, Jamie Smith, Griffin Mooers, Milan Klöwer, James Lottes, Stephan Rasp, Peter Düben, Sam Hatfield, Peter Battaglia, Alvaro Sanchez-Gonzalez, Matthew Willson, Michael P. Brenner, Stephan Hoyer
https://arxiv.org/abs/2311.07222
- We will focus on the main paper plus appendices B, C, D, and G, according to the interests of attendees.
Our rating: 4.00 ± 0.57
Week 45 (Nov 06)
Topic: Flow matching
- Hosts: Filip, Dong
Discrete Flow Matching
Itai Gat, Tal Remez, Neta Shaul, Felix Kreuk, Ricky T. Q. Chen, Gabriel Synnaeve, Yossi Adi, Yaron Lipman
https://arxiv.org/abs/2407.15595
- Related video: https://www.youtube.com/watch?v=fuzYeqp1n5g
Our rating: 3.00 ± 0.63
Week 47 (Nov 20)
Topic: Parameter-efficient fine-tuning
- Hosts: Hari, Erik
LoRA: Low-Rank Adaptation of Large Language Models
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
https://arxiv.org/abs/2106.09685
Our rating: 3.00 ± 0.58
Week 49 (Dec 04)
- Host: Joel
Position: The Platonic Representation Hypothesis
Minyoung Huh, Brian Cheung, Tongzhou Wang, Phillip Isola
https://proceedings.mlr.press/v235/huh24a.html
Our rating: 3.00 ± 0.82
Week 51 (Dec 18)
Topic: Parameter-efficient fine-tuning
- Hosts: Hari, Erik
Rating scale used for "Our rating" (adapted from the NeurIPS reviewer guidelines):
5: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of ML and excellent impact on multiple areas of ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
4: Strong Accept: Technically strong paper with novel ideas, excellent impact on at least one area of ML or high-to-excellent impact on multiple areas of ML, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
3: Accept: Technically solid paper, with high impact on at least one sub-area of ML or moderate-to-high impact on more than one area of ML, with good-to-excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
2: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, or ethical considerations.
1: Borderline Accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.
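The "Our rating" lines above summarize attendees' scores on this scale as mean ± spread. As a minimal sketch of how such a summary could be computed: the function name and the example score list below are hypothetical, and the use of the population standard deviation is an assumption, though it is consistent with the figures listed above.

```python
import statistics

def aggregate_ratings(scores: list[int]) -> str:
    """Summarize attendee scores (1-5 on the scale above) as 'mean ± std'.

    Assumption: the spread is the population standard deviation
    (statistics.pstdev), which matches the listed figures.
    """
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)  # population standard deviation
    return f"{mean:.2f} ± {std:.2f}"

# Hypothetical score set that reproduces the first listed rating:
print(aggregate_ratings([4, 4, 4, 3, 3]))  # -> 3.60 ± 0.49
```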