See our paper: "Make Every Token Count: A Systematic Survey on Decoding Methods for Foundation Models"
If you find a mistake or have any suggestions, please let us know by e-mail: hwang219@hawk.iit.edu
- Advanced decoding methods can enhance generation at inference time, providing an effective and efficient way to control outputs from LLMs and LVLMs.
- The QuickStart section offers an overview to help you get started with decoding methods quickly.
- This paper list compiles relevant research on decoding methods for both LLMs and LVLMs.
- About
- Updates
- QuickStart
- Papers
- Citation
## About

In this paper, we survey and categorize research on decoding methods for foundation models along two key dimensions: paradigms and applications. We identify three primary paradigms in recent decoding algorithms for large generative models: contrastive decoding, guided decoding, and parallel decoding. A short illustrative code sketch accompanies each paradigm's paper section below.
## Updates

- [12/16/2024] The paper list has been released!
## QuickStart

- NeurIPS 2024 Tutorial: Beyond Decoding
- How to generate text: using different decoding methods for language generation with Transformers
- Generating Human-level Text with Contrastive Search in Transformers
- Decoding Strategies in Large Language Models
- CMU Neural Nets for NLP 2021 (18): Advanced Search Algorithms
- CMU Advanced NLP Fall 2024 (22): From Decoding to Meta Generation Inference Time Algorithms for LMs
- UMass CS685 S24 (Advanced NLP) #13: Decoding from language models
- CMU LTI Colloquium: Reasoning with Inference Time Compute
- Speculative Decoding: When Two LLMs are Faster than One
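
The posts and lectures above cover the standard sampling strategies in depth. As a quick reference, here is a minimal, self-contained sketch of greedy, temperature, top-k, and top-p (nucleus) selection over a toy next-token distribution; it uses plain PyTorch, and all values and function names are illustrative rather than taken from any of the resources above.

```python
import torch

torch.manual_seed(0)

# Toy next-token logits over a 10-token vocabulary (illustrative values only).
logits = torch.tensor([2.0, 1.5, 1.2, 0.8, 0.5, 0.1, -0.3, -0.7, -1.0, -1.5])

def greedy(logits):
    # Pick the single most likely token.
    return int(torch.argmax(logits))

def temperature_sample(logits, temperature=0.7):
    # Lower temperature sharpens the distribution; higher temperature flattens it.
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

def top_k_sample(logits, k=5):
    # Keep only the k most likely tokens, renormalize, then sample.
    topk = torch.topk(logits, k)
    probs = torch.softmax(topk.values, dim=-1)
    return int(topk.indices[torch.multinomial(probs, 1)])

def top_p_sample(logits, p=0.9):
    # Nucleus sampling: keep the smallest set of most likely tokens whose
    # cumulative probability covers p, renormalize, then sample.
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = (cumulative - sorted_probs) < p  # always keeps at least the top token
    kept = sorted_probs[keep] / sorted_probs[keep].sum()
    return int(sorted_idx[keep][torch.multinomial(kept, 1)])

print(greedy(logits), temperature_sample(logits), top_k_sample(logits), top_p_sample(logits))
```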
## Papers

### Surveys

- Comparison of Diverse Decoding Methods from Conditional Language Models
  Daphne Ippolito, Reno Kriz, João Sedoc, Maria Kustikova, Chris Callison-Burch. [pdf]
- On Decoding Strategies for Neural Text Generators
  Gian Wiher, Clara Meister, Ryan Cotterell. [pdf]
- Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding
  Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui. [pdf]
- From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
  Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, Zaid Harchaoui. [pdf]
- Controllable Text Generation for Large Language Models: A Survey
  Xun Liang, Hanyu Wang, Yezhaohui Wang, Shichao Song, Jiawei Yang, Simin Niu, Jie Hu, Dan Liu, Shunyu Yao, Feiyu Xiong, Zhiyu Li. [pdf]
### Contrastive Decoding

- DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
  Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi. [pdf], [code]
- Contrastive Decoding: Open-ended Text Generation as Optimization
  Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis. [pdf], [code]
- Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
  Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Wen-tau Yih. [pdf], [code]
- Speculative Contrastive Decoding
  Hongyi Yuan, Keming Lu, Fei Huang, Zheng Yuan, Chang Zhou. [pdf]
- DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
  Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, Pengcheng He. [pdf], [code]
- Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
  Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing. [pdf], [code]
- ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding
  Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. [pdf], [code]
- Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
  Zheng Zhao, Emilio Monti, Jens Lehmann, Haytham Assem. [pdf]
- Entropy-Based Decoding for Retrieval-Augmented Large Language Models
  Zexuan Qiu, Zijing Ou, Bin Wu, Jingjing Li, Aiwei Liu, Irwin King. [pdf]
- Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts
  Youna Kim, Hyuhng Joon Kim, Cheonbok Park, Choonghyun Park, Hyunsoo Cho, Junyeob Kim, Kang Min Yoo, Sang-goo Lee, Taeuk Kim. [pdf]
- Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast
  Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, Yu Meng. [pdf], [code]
- Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models
  Souvik Das, Lifeng Jin, Linfeng Song, Haitao Mi, Baolin Peng, Dong Yu. [pdf], [code]
- Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding
  Xintong Wang, Jingheng Pan, Liang Ding, Chris Biemann. [pdf]
- IBD: Alleviating Hallucinations in Large Vision-Language Models via Image-Biased Decoding
  Lanyun Zhu, Deyi Ji, Tianrun Chen, Peng Xu, Jieping Ye, Jun Liu. [pdf]
- VACoDe: Visual Augmented Contrastive Decoding
  Sihyeon Kim, Boryeong Cho, Sangmin Bae, Sumyeong Ahn, Se-Young Yun. [pdf]
- VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding
  Jiaqi Wang, Yifei Gao, Jitao Sang. [pdf]
- Mitigating Hallucinations in Large Vision-Language Models (LVLMs) via Language-Contrastive Decoding (LCD)
  Avshalom Manevich, Reut Tsarfaty. [pdf]
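
The papers above share one recipe: score the next token by contrasting two distributions, for example a strong expert model against a weaker amateur, a context-aware pass against a context-free one, or a late layer against an early one. The sketch below is only a toy illustration of the expert-vs-amateur contrast with an adaptive plausibility constraint in the spirit of Contrastive Decoding (Li et al.); the logits are hand-picked, and nothing here reproduces the implementation of any specific paper listed.

```python
import torch

def contrastive_next_token(expert_logits, amateur_logits, alpha=0.1):
    """Pick the next token by contrasting an expert and an amateur distribution.

    Tokens whose expert probability falls below alpha times the expert's max
    probability are masked out (the adaptive plausibility constraint); among
    the remaining tokens, the one maximizing log p_expert - log p_amateur wins.
    """
    p_exp = torch.softmax(expert_logits, dim=-1)
    p_ama = torch.softmax(amateur_logits, dim=-1)

    # Plausibility mask: only consider tokens the expert finds reasonably likely.
    plausible = p_exp >= alpha * p_exp.max()

    score = torch.log(p_exp) - torch.log(p_ama)
    score[~plausible] = float("-inf")
    return int(torch.argmax(score))

# Toy example: the expert prefers token 0 while the amateur prefers token 1,
# so contrasting the two makes token 0 stand out even more. Prints 0.
expert_logits = torch.tensor([3.0, 2.5, 0.5, -1.0])
amateur_logits = torch.tensor([1.0, 2.4, 0.4, -0.5])
print(contrastive_next_token(expert_logits, amateur_logits))
```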
### Guided Decoding

- NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints
  Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi. [pdf]
- FUDGE: Controlled Text Generation With Future Discriminators
  Kevin Yang, Dan Klein. [pdf], [code]
- NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics
  Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi. [pdf]
- Critic-Guided Decoding for Controlled Text Generation
  Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung. [pdf], [code]
- NaturalProver: Grounded Mathematical Proof Generation with Language Models
  Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi. [pdf], [code]
- MIL-Decoding: Detoxifying Language Models at Token-Level via Multiple Instance Learning
  Xu Zhang, Xiaojun Wan. [pdf]
- Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model
  Haikang Deng, Colin Raffel. [pdf]
- Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding
  Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz. [pdf]
- Planning with Large Language Models for Code Generation
  Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B. Tenenbaum, Chuang Gan. [pdf], [code]
- Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding
  Ailin Deng, Zhirui Chen, Bryan Hooi. [pdf], [code]
- Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning
  Tinghui Zhu, Kai Zhang, Jian Xie, Yu Su. [pdf], [code]
- A Data-Driven Guided Decoding Mechanism for Diagnostic Captioning
  Panagiotis Kaliosis, John Pavlopoulos, Foivos Charalampakos, Georgios Moschovis, Ion Androutsopoulos. [pdf], [code]
- Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding
  Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, Kyomin Jung. [pdf]
- Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models
  Fushuo Huo, Wenchao Xu, Zhong Zhang, Haozhao Wang, Zhicheng Chen, Peilin Zhao. [pdf], [code]
- Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training
  Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, Jun Wang. [pdf], [code]
- From Uncertainty to Trust: Enhancing Reliability in Vision-Language Models with Uncertainty-Guided Dropout Decoding
  Yixiong Fang, Ziran Yang, Zhaorun Chen, Zhuokai Zhao, Jiawei Zhou. [pdf], [code]
- Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context
  Lakshya A Agrawal, Aditya Kanade, Navin Goyal, Shuvendu K. Lahiri, Sriram K. Rajamani. [pdf], [code]
- SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding
  Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, Radha Poovendran. [pdf], [code]
- Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation
  Luca Beurer-Kellner, Marc Fischer, Martin Vechev. [pdf], [code]
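
A common thread in this group is steering next-token choices with an external signal, such as a discriminator, reward model, verifier, or search procedure, evaluated on candidate continuations. The sketch below is a toy illustration in the spirit of discriminator-guided methods like FUDGE; the vocabulary, scorer, and logits are invented for illustration and are not taken from any paper listed above.

```python
import torch

VOCAB = ["great", "good", "awful", "terrible", "movie", "plot"]

def toy_positivity_scorer(prefix_tokens):
    """Stand-in for a learned discriminator: returns P(attribute | prefix).

    Here it simply rewards prefixes containing positive words and penalizes
    prefixes containing negative ones.
    """
    positive = {"great", "good"}
    negative = {"awful", "terrible"}
    score = 0.5
    score += 0.4 * sum(t in positive for t in prefix_tokens)
    score -= 0.4 * sum(t in negative for t in prefix_tokens)
    return max(min(score, 0.99), 0.01)

def guided_next_token(lm_logits, prefix_tokens, guidance_weight=1.0):
    # Combine the LM's next-token log-probabilities with the scorer's judgment
    # of each candidate continuation:
    #   score(x) = log p_LM(x | prefix) + w * log p(attribute | prefix + x)
    lm_logprobs = torch.log_softmax(lm_logits, dim=-1)
    guided = lm_logprobs.clone()
    for i, token in enumerate(VOCAB):
        attr = toy_positivity_scorer(prefix_tokens + [token])
        guided[i] = guided[i] + guidance_weight * torch.log(torch.tensor(attr))
    return VOCAB[int(torch.argmax(guided))]

# The base LM slightly prefers "terrible", but the attribute scorer steers the
# choice to "great".
lm_logits = torch.tensor([1.0, 0.8, 0.9, 1.1, 0.2, 0.1])
print(guided_next_token(lm_logits, prefix_tokens=["the", "movie", "was"]))
```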
### Parallel Decoding

- Blockwise Parallel Decoding for Deep Autoregressive Models
  Mitchell Stern, Noam Shazeer, Jakob Uszkoreit. [pdf]
- Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation
  Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, Zhifang Sui. [pdf], [code]
- Accelerating Transformer Inference for Translation via Parallel Decoding
  Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, Emanuele Rodolà. [pdf], [code]
- Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
  Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, Sharad Mehrotra. [pdf], [code]
- Fast Inference from Transformers via Speculative Decoding
  Yaniv Leviathan, Matan Kalman, Yossi Matias. [pdf]
- Accelerating Large Language Model Decoding with Speculative Sampling
  Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, John Jumper. [pdf]
- DistillSpec: Improving Speculative Decoding via Knowledge Distillation
  Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, Rishabh Agarwal. [pdf]
- SpecInfer: Accelerating Generative Large Language Model Serving with Tree-based Speculative Inference and Verification
  Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, Chunan Shi, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, Zhihao Jia. [pdf], [code]
- Online Speculative Decoding
  Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang. [pdf], [code]
- Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting
  Zilong Wang, Zifeng Wang, Long Le, Huaixiu Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, Chen-Yu Lee, Tomas Pfister. [pdf]
- Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
  Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang. [pdf], [code]
- Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
  Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao. [pdf], [code]
- EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty
  Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang. [pdf], [code]
- EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees
  Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang. [pdf], [code]
- On Speculative Decoding for Multimodal Large Language Models
  Mukul Gagrani, Raghavv Goel, Wonseok Jeon, Junyoung Park, Mingu Lee, Christopher Lott. [pdf]
- LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding
  Doohyuk Jang, Sihwan Park, June Yong Yang, Yeonsung Jung, Jihun Yun, Souvik Kundu, Sung-Yub Kim, Eunho Yang. [pdf]
- Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding
  Yao Teng, Han Shi, Xian Liu, Xuefei Ning, Guohao Dai, Yu Wang, Zhenguo Li, Xihui Liu. [pdf]
- Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass
  Ethan Shen, Alan Fan, Sarah M. Pratt, Jae Sung Park, Matthew Wallingford, Sham M. Kakade, Ari Holtzman, Ranjay Krishna, Ali Farhadi, Aditya Kusupati. [pdf], [code]
- SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
  Heming Xia, Yongqi Li, Jun Zhang, Cunxiao Du, Wenjie Li. [pdf], [code]
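
Most entries in this group build on the draft-then-verify idea: a cheap drafter proposes tokens, the target model scores them in parallel, and proposals are accepted only where the two models agree closely enough that the target distribution is preserved exactly. The sketch below is a toy, single-step illustration of the accept/reject rule from speculative sampling (Leviathan et al.; Chen et al.); the "models" are just fixed categorical distributions, so it demonstrates the rule itself rather than any system listed above.

```python
import torch

torch.manual_seed(0)

# Toy next-token distributions over a 5-token vocabulary.
target_p = torch.tensor([0.40, 0.30, 0.15, 0.10, 0.05])  # large target model p(x)
draft_p = torch.tensor([0.25, 0.25, 0.25, 0.15, 0.10])   # small draft model q(x)

def speculative_step(target_p, draft_p):
    """One accept/reject step of speculative sampling.

    Draw x ~ q and accept it with probability min(1, p(x) / q(x)); otherwise
    resample from the residual distribution max(p - q, 0), renormalized.
    The marginal distribution of the returned token is exactly p.
    """
    x = int(torch.multinomial(draft_p, 1))
    if torch.rand(()) < torch.clamp(target_p[x] / draft_p[x], max=1.0):
        return x, True  # draft token accepted
    residual = torch.clamp(target_p - draft_p, min=0.0)
    residual = residual / residual.sum()
    return int(torch.multinomial(residual, 1)), False  # corrected token

# Empirically check that accepted-or-corrected samples follow the target.
counts = torch.zeros(5)
accepted = 0
for _ in range(20000):
    token, was_accepted = speculative_step(target_p, draft_p)
    counts[token] += 1
    accepted += was_accepted
print("empirical:", [round(v, 3) for v in (counts / counts.sum()).tolist()])
print("target:   ", target_p.tolist())
print("acceptance rate:", accepted / 20000)
```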
### Hallucination Mitigation

- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations
  Aryo Pradipta Gema, Chen Jin, Ahmed Abdulaal, Tom Diethe, Philip Teare, Beatrice Alex, Pasquale Minervini, Amrutha Saseendran. [pdf], [code]
- Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators
  Dingkang Yang, Dongling Xiao, Jinjie Wei, Mingcheng Li, Zhaoyu Chen, Ke Li, Lihua Zhang. [pdf]
- Delve into Visual Contrastive Decoding for Hallucination Mitigation of Large Vision-Language Models
  Yi-Lun Lee, Yi-Hsuan Tsai, Wei-Chen Chiu. [pdf], [code]
- ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models
  Yeji Park, Deokyeong Lee, Junsuk Choe, Buru Chang. [pdf], [code]
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
  Chenxi Wang, Xiang Chen, Ningyu Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, Huajun Chen. [pdf], [code]
- CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs
  Zhehan Kan, Ce Zhang, Zihan Liao, Yapeng Tian, Wenming Yang, Junyuan Xiao, Xu Li, Dongmei Jiang, Yaowei Wang, Qingmin Liao. [pdf]
### Safety and Alignment

- SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models
  Somnath Banerjee, Sayan Layek, Soham Tripathy, Shanu Kumar, Animesh Mukherjee, Rima Hazra. [pdf], [code]
- Adversarial Contrastive Decoding: Boosting Safety Alignment of Large Language Models via Opposite Prompt Optimization
  Zhengyue Zhao, Xiaoyun Zhang, Kaidi Xu, Xing Hu, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen. [pdf]
- Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level
  Xinyi Zeng, Yuying Shang, Yutao Zhu, Jiawei Chen, Yu Tian. [pdf]
- Probing the Safety Response Boundary of Large Language Models via Unsafe Decoding Path Generation
  Haoyu Wang, Bingzhe Wu, Yatao Bian, Yongzhe Chang, Xueqian Wang, Peilin Zhao. [pdf]
- Parameter-Efficient Detoxification with Contrastive Decoding
  Tong Niu, Caiming Xiong, Yingbo Zhou, Semih Yavuz. [pdf]
- Transfer Q Star: Principled Decoding for LLM Alignment
  Souradip Chakraborty, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, Furong Huang. [pdf]
- Decoding Matters: Addressing Amplification Bias and Homogeneity Issue for LLM-based Recommendation
  Keqin Bao, Jizhi Zhang, Yang Zhang, Xinyue Huo, Chong Chen, Fuli Feng. [pdf], [code]
### Reasoning

- Contrastive Decoding Improves Reasoning in Large Language Models
  Sean O'Brien, Mike Lewis. [pdf]
- Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
  Phuc Phan, Hieu Tran, Long Phan. [pdf], [code]
- Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding
  Tianqiao Liu, Zui Chen, Zitao Liu, Mi Tian, Weiqi Luo. [pdf]
- SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding
  Zhenglin Wang, Jialong Wu, Yilong Lai, Congzhi Zhang, Deyu Zhou. [pdf], [code]
- Chain-of-Thought Reasoning Without Prompting
  Xuezhi Wang, Denny Zhou. [pdf]
- Self-Para-Consistency: Improving Reasoning Tasks at Low Cost for Large Language Models
  Wenqing Chen, Weicheng Wang, Zhixuan Chu, Kui Ren, Zibin Zheng, Zhichao Lu. [pdf], [code]
- Self-Evaluation Guided Beam Search for Reasoning
  Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, Qizhe Xie. [pdf], [code]
- Learning to Decode Collaboratively with Multiple Language Models
  Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, David Sontag. [pdf], [code]
- The Era of Semantic Decoding
  Maxime Peyrard, Martin Josifoski, Robert West. [pdf]
### Retrieval

- REST: Retrieval-Based Speculative Decoding
  Zhenyu He, Zexuan Zhong, Tianle Cai, Jason Lee, Di He. [pdf], [code]
- Nonparametric Decoding for Generative Retrieval
  Hyunji Lee, JaeYoung Kim, Hoyeon Chang, Hanseok Oh, Sohee Yang, Vladimir Karpukhin, Yi Lu, Minjoon Seo. [pdf], [code]
- Planning Ahead in Generative Retrieval: Guiding Autoregressive Generation through Simultaneous Decoding
  Hansi Zeng, Chen Luo, Hamed Zamani. [pdf], [code]
### Code Generation

- DOCE: Finding the Sweet Spot for Execution-Based Code Generation
  Haau-Sing Li, Patrick Fernandes, Iryna Gurevych, André F.T. Martins. [pdf], [code]
- USCD: Improving Code Generation of LLMs by Uncertainty-Aware Selective Contrastive Decoding
  Shuai Wang, Liang Ding, Li Shen, Yong Luo, Zheng He, Wei Yu, Dacheng Tao. [pdf]
- Selective Prompt Anchoring for Code Generation
  Yuan Tian, Tianyi Zhang. [pdf], [code]
- DocCGen: Document-based Controlled Code Generation
  Sameer Pimparkhede, Mehant Kammakomati, Srikanth G. Tamilselvam, Prince Kumar, Ashok Pon Kumar, Pushpak Bhattacharyya. [pdf], [code]
- Constrained Decoding for Secure Code Generation
  Yanjun Fu, Ethan Baker, Yu Ding, Yizheng Chen. [pdf]
- Hot or Cold? Adaptive Temperature Sampling for Code Generation with Large Language Models
  Yuqi Zhu, Jia Li, Ge Li, YunFei Zhao, Jia Li, Zhi Jin, Hong Mei. [pdf], [code]
- LEVER: Learning to Verify Language-to-Code Generation with Execution
  Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I. Wang, Xi Victoria Lin. [pdf], [code]
- Decoding Secret Memorization in Code LLMs Through Token-Level Characterization
  Yuqing Nie, Chong Wang, Kailong Wang, Guoai Xu, Guosheng Xu, Haoyu Wang. [pdf]
### Efficiency

- Hierarchical Skip Decoding for Efficient Autoregressive Text Generation
  Yunqi Zhu, Xuebing Yang, Yuanyuan Wu, Wensheng Zhang. [pdf]
- A Frustratingly Simple Decoding Method for Neural Text Generation
  Haoran Yang, Deng Cai, Huayang Li, Wei Bi, Wai Lam, Shuming Shi. [pdf], [code]
- Adaptive Draft-Verification for Efficient Large Language Model Decoding
  Xukun Liu, Bowen Lei, Ruqi Zhang, Dongkuan Xu. [pdf]
### Image Generation

- Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding
  Yao Teng, Han Shi, Xian Liu, Xuefei Ning, Guohao Dai, Yu Wang, Zhenguo Li, Xihui Liu. [pdf]
- Emage: Non-Autoregressive Text-to-Image Generation
  Zhangyin Feng, Runyi Hu, Liangxin Liu, Fan Zhang, Duyu Tang, Yong Dai, Xiaocheng Feng, Jiwei Li, Bing Qin, Shuming Shi. [pdf]
- HART: Efficient Visual Generation with Hybrid Autoregressive Transformer
  Haotian Tang, Yecheng Wu, Shang Yang, Enze Xie, Junsong Chen, Junyu Chen, Zhuoyang Zhang, Han Cai, Yao Lu, Song Han. [pdf], [code]
### Medical

- Mitigating Hallucinations of Large Language Models in Medical Information Extraction via Contrastive Decoding
  Derong Xu, Ziheng Zhang, Zhihong Zhu, Zhenxi Lin, Qidong Liu, Xian Wu, Tong Xu, Xiangyu Zhao, Yefeng Zheng, Enhong Chen. [pdf], [code]
### Embodied Agents

- Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents
  Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter. [pdf], [code]
- Bidirectional Decoding: Improving Action Chunking via Closed-Loop Resampling
  Yuejiang Liu, Jubayer Ibn Hamid, Annie Xie, Yoonho Lee, Maximilian Du, Chelsea Finn. [pdf], [code]
## Contributing

- We may have missed important works in this field; please feel free to contribute your own work or other related works here. Thanks in advance for your efforts!
## TODO

- Release a decoding playground on Hugging Face