PaperList

Introduction

This repository collects papers related to model lenses (e.g., the logit lens and tuned lens) and reasoning in large language models.

Papers

Survey

  • Large Language Models: A Survey. Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, Jianfeng Gao. arXiv preprint arXiv:2402.06196, 2024. [pdf]

  • Challenges and applications of large language models. Kaddour, Jean and Harris, Joshua and Mozes, Maximilian and Bradley, Herbie and Raileanu, Roberta and McHardy, Robert. arXiv preprint arXiv:2307.10169, 2023. [pdf]

  • A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications. Sahoo, Pranab and Singh, Ayush Kumar and Saha, Sriparna and Jain, Vinija and Mondal, Samrat and Chadha, Aman. arXiv preprint arXiv:2402.07927, 2024. [pdf]

Benchmark

  • Training Verifiers to Solve Math Word Problems. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, John Schulman. CoRR, 2021. [pdf, code]

  • Measuring Mathematical Problem Solving with the MATH Dataset. Hendrycks, Dan and Burns, Collin and Kadavath, Saurav and Arora, Akul and Basart, Steven and Tang, Eric and Song, Dawn and Steinhardt, Jacob. NeurIPS, 2021. [pdf, code, dataset]

  • Let's Verify Step by Step. Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl. arXiv preprint arXiv:2305.20050, 2023. [pdf, dataset]

  • Orca-Math: Unlocking the potential of SLMs in Grade School Math. Mitra, Arindam and Khanpour, Hamed and Rosset, Corby and Awadallah, Ahmed. arXiv preprint arXiv:2402.14830, 2024. [pdf, dataset]

  • MathScale: Scaling Instruction Tuning for Mathematical Reasoning. Zhengyang Tang, Xingxing Zhang, Benyou Wang, Furu Wei. arXiv preprint arXiv:2403.02884, 2024. [pdf]

  • AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. Yifan Zhang, Yifan Luo, Yang Yuan, Andrew Chi-Chih Yao. arXiv preprint arXiv:2402.07625, 2024. [pdf, code, dataset]

  • Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models. Levy, Mosh and Jacoby, Alon and Goldberg, Yoav. arXiv preprint arXiv:2402.14848, 2024. [pdf, dataset]

Method

  • Understanding intermediate layers using linear classifier probes. Alain, Guillaume and Bengio, Yoshua. ICLR, 2017. [pdf]

  • interpreting GPT: the logit lens. nostalgebraist. LessWrong, 2020. [url, code]

  • Eliciting Latent Predictions from Transformers with the Tuned Lens. Belrose, Nora and Furman, Zach and Smith, Logan and Halawi, Danny and Ostrovsky, Igor and McKinney, Lev and Biderman, Stella and Steinhardt, Jacob. CoRR, 2023. [pdf, code]

  • Overthinking the truth: Understanding how language models process false demonstrations. Halawi, Danny and Denain, Jean-Stanislas and Steinhardt, Jacob. ICLR, 2024. [pdf, code]

  • Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking. Zelikman, Eric and Harik, Georges and Shao, Yijia and Jayasiri, Varuna and Haber, Nick and Goodman, Noah D. arXiv preprint arXiv:2403.09629, 2024. [pdf, code]
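
The lens-style methods above share one basic move: take an intermediate hidden state, project it through the model's own output head, and read off what the model would predict at that depth. The snippet below is a rough illustration only, a minimal logit-lens sketch assuming the Hugging Face transformers library and GPT-2; it is not code from any of the papers listed here, and the prompt is arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load GPT-2 and its tokenizer (attribute names below are GPT-2 specific).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple: the embedding output plus one tensor per block.
for layer, hidden in enumerate(outputs.hidden_states):
    # Logit lens: apply the final layer norm and the unembedding matrix
    # to an intermediate hidden state of the last token.
    logits = model.lm_head(model.transformer.ln_f(hidden[:, -1, :]))
    top_token = tokenizer.decode(logits.argmax(dim=-1))
    print(f"layer {layer:2d} -> {top_token!r}")
```

The tuned lens (Belrose et al., above) keeps the same reading loop but replaces the raw projection with a small learned translator per layer, which tends to give more faithful intermediate predictions.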

Others

  • Do Large Language Models Latently Perform Multi-Hop Reasoning? Yang, Sohee and Gribovskaya, Elena and Kassner, Nora and Geva, Mor and Riedel, Sebastian. arXiv preprint arXiv:2402.16837, 2024. [pdf]

  • Chain-of-Thought Reasoning Without Prompting. Wang, Xuezhi and Zhou, Denny. arXiv preprint arXiv:2402.10200, 2024. [pdf]

  • Premise Order Matters in Reasoning with Large Language Models. Chen, Xinyun and Chi, Ryan A and Wang, Xuezhi and Zhou, Denny. arXiv preprint arXiv:2402.08939, 2024. [pdf]

  • Teaching Large Language Models to Reason with Reinforcement Learning. Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, Roberta Raileanu. arXiv preprint arXiv:2403.04642, 2024. [pdf]

  • Residual connections encourage iterative inference. Jastrzebski, Stanislaw and Arpit, Devansh and Ballas, Nicolas and Verma, Vikas and Che, Tong and Bengio, Yoshua. ICLR, 2018. [pdf, code]

  • Confident adaptive language modeling. Schuster, Tal and Fisch, Adam and Gupta, Jai and Dehghani, Mostafa and Bahri, Dara and Tran, Vinh and Tay, Yi and Metzler, Donald. NeurIPS, 2022. [pdf, code]

  • All bark and no bite: Rogue dimensions in transformer language models obscure representational quality. Timkey, William and van Schijndel, Marten. EMNLP, 2021. [pdf, code]
  • Locating and editing factual associations in GPT. Meng, Kevin and Bau, David and Andonian, Alex and Belinkov, Yonatan. NeurIPS, 2022. [pdf, code]

  • Solving quantitative reasoning problems with language models. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. arXiv preprint arXiv:2206.14858, 2022. [pdf]

  • Automatic chain of thought prompting in large language models. Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Smola, Alex. ICLR, 2023. [pdf, code]

  • Finetuned language models are zero-shot learners. Wei, Jason and Bosma, Maarten and Zhao, Vincent Y and Guu, Kelvin and Yu, Adams Wei and Lester, Brian and Du, Nan and Dai, Andrew M and Le, Quoc V. ICLR, 2022. [pdf, code]

  • Scaling instruction-finetuned language models. Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Yunxuan and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and others. arXiv preprint arXiv:2210.11416, 2022. [pdf, code]

  • STaR: Bootstrapping Reasoning with Reasoning. Zelikman, Eric and Wu, Yuhuai and Mu, Jesse and Goodman, Noah. NeurIPS, 2022. [pdf, code]

  • WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei. arXiv preprint arXiv:2308.09583, 2023. [pdf, code]

  • Language models can teach themselves to program better. Haluptzok, Patrick and Bowers, Matthew and Kalai, Adam Tauman. ICLR, 2023. [pdf, code]
