Open-L2O: A Comprehensive and Reproducible Benchmark for Learning to Optimize Algorithms
L2O/NCO code from the CIAM Group at SUSTech, Shenzhen, China
MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning (https://proceedings.neurips.cc/paper_files/paper/2023/hash/232eee8ef411a0a316efa298d7be3c2b-Abstract-Datasets_and_Benchmarks.html)
[ICML 2024] "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts"
[ICML 2023] "Towards Omni-generalizable Neural Methods for Vehicle Routing Problems"
[NeurIPS 2023] "Learning to Search Feasible and Infeasible Regions of Routing Problems with Flexible Neural k-Opt"
A collection of MetaBBO papers and code sources
[NeurIPS 2020 Spotlight Oral] "Training Stronger Baselines for Learning to Optimize", Tianlong Chen*, Weiyi Zhang*, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang
Optim4RL is a JAX framework for learning to optimize in reinforcement learning.
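The learning-to-optimize idea behind such frameworks can be illustrated with a minimal JAX sketch (a hypothetical example, not Optim4RL's actual API): a tiny per-coordinate network maps gradients to parameter updates, and its own weights are meta-trained by differentiating through an unrolled inner optimization on a toy quadratic.

```python
import jax
import jax.numpy as jnp

def learned_update(meta_params, grad):
    # Per-coordinate MLP: maps each gradient entry (plus its sign) to an update.
    w1, b1, w2, b2 = meta_params
    feats = jnp.stack([grad, jnp.sign(grad)], axis=-1)   # (..., 2)
    hidden = jnp.tanh(feats @ w1 + b1)                   # (..., 8)
    return (hidden @ w2 + b2).squeeze(-1)                # (...,)

def inner_loss(x):
    # Toy quadratic task the learned optimizer is meta-trained on.
    return jnp.sum((x - 3.0) ** 2)

def meta_loss(meta_params, x0, steps=20):
    # Unroll the inner optimization and accumulate the loss along the trajectory.
    def step(x, _):
        g = jax.grad(inner_loss)(x)
        return x + learned_update(meta_params, g), inner_loss(x)
    _, losses = jax.lax.scan(step, x0, None, length=steps)
    return losses.sum()

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
meta_params = (0.01 * jax.random.normal(k1, (2, 8)), jnp.zeros(8),
               0.01 * jax.random.normal(k2, (8, 1)), jnp.zeros(1))
x0 = jnp.ones(5)
# One meta-gradient step on the optimizer's own parameters.
meta_grads = jax.grad(meta_loss)(meta_params, x0)
meta_params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g,
                                     meta_params, meta_grads)
```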
[ICLR 2022] "Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How" by Yuning You, Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen
[ICML 2023] "Learning to Optimize Differentiable Games" by Xuxi Chen, Nelson Vadori, Tianlong Chen, Zhangyang Wang
Operator splitting can be used to design easy-to-train models for predict-and-optimize tasks that scale to problems with thousands of variables.
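As a concrete illustration of the predict-and-optimize pattern, here is a minimal JAX sketch (an assumed setup, not the repository's actual model): a linear predictor `predict_cost` outputs the cost vector of a box-constrained quadratic program, a fixed number of unrolled proximal-gradient (splitting) iterations solve it, and a decision-focused loss is backpropagated end to end.

```python
import jax
import jax.numpy as jnp

def predict_cost(theta, features):
    # Hypothetical linear predictor mapping raw features to problem costs.
    return features @ theta

def solve_unrolled(c, iters=50, step=0.5):
    # Operator-splitting (proximal-gradient) iterations for
    #   min_x 0.5*||x||^2 + c.x   s.t.  0 <= x <= 1:
    # a gradient step on the smooth part, then the prox of the box
    # indicator, which is just a clip. Unrolling keeps it differentiable.
    x = jnp.zeros_like(c)
    for _ in range(iters):
        x = jnp.clip(x - step * (x + c), 0.0, 1.0)
    return x

def decision_loss(theta, features, c_true):
    # Decision-focused loss: cost of the computed decision under the true costs.
    x = solve_unrolled(predict_cost(theta, features))
    return jnp.dot(c_true, x) + 0.5 * jnp.dot(x, x)

theta = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (4,))   # predictor weights
features = jax.random.normal(jax.random.PRNGKey(1), (1000, 4)) # 1000-variable problem
c_true = jnp.sin(jnp.linspace(0.0, 3.0, 1000))                 # ground-truth costs
# End-to-end gradient through the prediction and the unrolled solver.
grads = jax.grad(decision_loss)(theta, features, c_true)
theta = theta - 0.1 * grads
```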
[ICLR 2023] "M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation" by Junjie Yang, Xuxi Chen, Tianlong Chen, Zhangyang Wang, Yingbin Liang
[ECCV 2022] "Scalable Learning to Optimize: A Learned Optimizer Can Train Big Models" by Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Ahmed Awadallah, and Zhangyang Wang
[ICLR 2022] "Optimizer Amalgamation" by Tianshu Huang, Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang
TF2 implementation of "Learning to learn by gradient descent by gradient descent"
Deeply Learned Robust Matrix Completion for Large-scale Low-rank Data Recovery