Issues: volcengine/verl
#64: Actor model did not update correctly after upgrading Megatron to core-r0.6.0 (opened Dec 24, 2024 by Wodswos)
#24: Unexpected Increase in Rollout Time After Reducing num_hidden_layers in deepseek-llm-7b-chat Model (opened Nov 25, 2024 by metaqiang)
#21: Basic Tutorial: Adding a New LLM Inference/Serving Backend [labels: enhancement, generation] (opened Nov 22, 2024 by PeterSH6)
#20: Are the non-RmPad and RmPad versions of the model interchangeable? (opened Nov 21, 2024 by yanggthomas)
#15: [RFC] Megatron-LM and MCore maintenance issues for veRL [labels: enhancement, megatron] (opened Nov 19, 2024 by PeterSH6)
#12: Hangs during vLLM rollout, no error message [labels: bug, vllm related] (opened Nov 12, 2024 by Vamix)
#6: Can I run PPO on llama3.1-70B-instruct? [label: question] (opened Nov 5, 2024 by cingtiye)
ProTip! To find issues not updated in the last month, search: updated:<2024-12-07