Is this project still actively being maintained? #148
I previously raised a question in the Slack community channel regarding ongoing support for this project. About a month ago, there was a discussion promising continued development and updates. However, I have not seen any recent changes since then. Specifically, I am eager to see support for the new vllm/transformers packages, which are crucial for my current use cases. Could we get an update on the progress towards integrating these packages? Any timeline or roadmap would be greatly appreciated, as it would help us plan our projects accordingly.
I was using fastchat previously, and now I plan to use vllm and Ray Serve for LLM inference; it seems to be working well too.
I am also interested in finding a fastchat replacement, but I wonder how to implement a model registry, dynamic auto-scaling, and a unique entry URL with Ray? ;)
I think Ray Serve's ingress can handle the model registry, Ray autoscaling covers the scaling, and multi-application deployment may achieve the unique entry URL; see the sketch below.
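Something like this rough sketch of the idea, assuming Ray 2.7+ and using placeholder model names and replica counts (not from this repo): a FastAPI ingress as the single entry URL, per-model deployments with autoscaling, and a route-to-handle dict playing the role of a model registry.

```python
from fastapi import FastAPI
from ray import serve

app = FastAPI()


@serve.deployment(autoscaling_config={"min_replicas": 1, "max_replicas": 4})
class ModelWorker:
    def __init__(self, model_name: str):
        # In a real setup, load the model (e.g. a vllm engine) here.
        self.model_name = model_name

    async def generate(self, prompt: str) -> str:
        return f"[{self.model_name}] completion for: {prompt}"


@serve.deployment
@serve.ingress(app)
class Router:
    def __init__(self, llama_handle, mistral_handle):
        # The "model registry": route name -> deployment handle.
        self.models = {"llama": llama_handle, "mistral": mistral_handle}

    @app.post("/v1/{model_name}/generate")
    async def generate(self, model_name: str, prompt: str):
        handle = self.models[model_name]
        return await handle.generate.remote(prompt)


# Two model deployments behind one ingress / entry URL.
entrypoint = Router.bind(
    ModelWorker.options(name="llama").bind("llama-7b"),
    ModelWorker.options(name="mistral").bind("mistral-7b"),
)
# serve.run(entrypoint)  # exposes a single HTTP endpoint on the Ray cluster
```

This does not give you dynamic registration of new models at runtime, though, which is where the controller discussion below comes in.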
@leiwen83 here is the doc about how to run Ray Serve with autoscaling: For the model registry or unique entry URL/ingress, we need to take a further look; it may need customization on top of FastAPI?
A FastAPI change may not be enough... fastchat implements a controller which tracks the status of all workers, which is what makes the registry possible; roughly like the sketch below.
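A hypothetical illustration of that controller/registry idea (not FastChat's actual API; the names and the 60s timeout are made up): workers register themselves and send heartbeats, and the controller only dispatches to workers that are still alive. This is what plain FastAPI routing does not give you out of the box.

```python
import time
from typing import Optional

HEARTBEAT_TIMEOUT = 60  # seconds; an assumption, not FastChat's actual value


class Controller:
    def __init__(self):
        # model name -> {worker_url: last_heartbeat_timestamp}
        self.workers: dict[str, dict[str, float]] = {}

    def register(self, model_name: str, worker_url: str) -> None:
        # Called by a worker when it starts up (and again on each heartbeat).
        self.workers.setdefault(model_name, {})[worker_url] = time.time()

    def heartbeat(self, model_name: str, worker_url: str) -> None:
        self.register(model_name, worker_url)

    def get_worker(self, model_name: str) -> Optional[str]:
        # Drop workers whose heartbeat is stale, then pick a live one.
        now = time.time()
        alive = {
            url: ts
            for url, ts in self.workers.get(model_name, {}).items()
            if now - ts < HEARTBEAT_TIMEOUT
        }
        self.workers[model_name] = alive
        # FastChat uses smarter load-based dispatch; any live worker will do here.
        return next(iter(alive), None)
```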
I have upgraded vllm to 0.4.1 in my fork; check the details if you are interested ^_^: https://github.com/OpenCSGs/llm-inference/tree/main/llmserve/backend/llm/engines/vllm
There has been no release for 3 months and just a few commits recently, so will this project be actively maintained?
I tried serving some LLMs with ray-llm, and I needed to update transformers, install tiktoken, update vllm, etc. to make it work.
Hopefully, we can take some time to maintain this project, so we can use Ray as a unified framework for data processing, serving, tuning, and training.
Thanks and looking forward to your response.