Replies: 9 comments 4 replies
-
First and foremost, I extend my heartfelt gratitude for your outstanding contribution to the llm project.
-
It depends on your goals. You've already inspired many people by example!
-
Wow, that's wild. This project was around when I first started, but I haven't been in the space for a few months. If another library has come out with all the same features, I wouldn't be too mad at having to use the new one. I think it's better when we all contribute to the same project rather than splitting our efforts, so if you're already slowing down, I think it's fair to recommend the other one. YMMV. I'm just grateful that this project was around when I needed it. Thanks!
-
I think you should decide how you want to spend your spare time, but I want to leave a big thanks. If you feel that Candle is going in the right direction and pushing ahead much faster, there is no shame in directing people towards that project, especially since it has a full-time dev behind it and you don't feel like trying to catch up.
-
I voted Other, as I believe we must consider feedback from the Candle implementers before making our decision.
-
for me, the biggest win with using
-
I think independent (non-corporate-backed) libraries are very important and worth working on, even at a slower pace. Competition is good in open source too.
-
Hi, thanks for the amazing project. I have some experience with open-source AI projects, and I'd like to share some thoughts here. There are several ML frameworks in Rust, like the ones you mentioned: Candle, burn-rs, etc. They can support many of the LLMs, and each of these projects has its own goal. Think about it: why do we need a new framework when we already have PyTorch? The answers are quite different for the different projects. So, here is my thought: what is the roadmap or goal of rustformers/llm? If you want to build an open-source AI ecosystem with Rust, I believe it should be continued, adding support for the other ML frameworks step by step. I believe there are gaps between those frameworks and the end user, and projects like rustformers/llm can help with this. Although this is a long story, being more patient is okay. For example, I have spent the past 2-3 weeks trying to embed burn as a backend into a project and have only made about 0.1% progress. I believe we are fighting for open AI technology, not for maintaining the API or the CLI (inspired by other maintainers, not mine). So, as I mentioned before, you could stop for a little bit to think more about the goal of the project. By the way, I believe I can make some contributions once I am familiar with the Rust ML frameworks. So, keep fighting.
-
Thanks to everyone for the insight, feedback, and motivation. After reading through what everyone's said, talking to a few users, and chatting with Laurent regarding Candle and candle-transformers, I'm happy to state that the project will go on 🚀 To list the reasons I came to this decision:
That being said, I'm looking forward to a harmonious relationship with Candle - I would love to implement Candle as an alternate backend (#31), as it'll enable easy use of LLMs on platforms where GGML may not be ideal or available, as well as simplifying the build. We'll see what the future has in store for us 🙂 Thanks again to everyone weighing in - your support was crucial to this decision!
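For readers curious what an alternate-backend design can look like in Rust, here is a minimal sketch of dispatching inference through a common trait. All names here (`Backend`, `GgmlBackend`, `CandleBackend`, `run`) are hypothetical illustrations, not the actual `llm` or Candle APIs:

```rust
// Hypothetical sketch: pluggable inference backends behind one trait.
// None of these names come from the real `llm` or Candle crates.
trait Backend {
    fn name(&self) -> &'static str;
    fn infer(&self, prompt: &str) -> String;
}

struct GgmlBackend;
struct CandleBackend;

impl Backend for GgmlBackend {
    fn name(&self) -> &'static str {
        "ggml"
    }
    fn infer(&self, prompt: &str) -> String {
        // A real implementation would call into the GGML bindings here.
        format!("[ggml] {}", prompt)
    }
}

impl Backend for CandleBackend {
    fn name(&self) -> &'static str {
        "candle"
    }
    fn infer(&self, prompt: &str) -> String {
        // A real implementation would dispatch to candle-transformers here.
        format!("[candle] {}", prompt)
    }
}

// Downstream code depends only on the trait, so backends can be
// swapped at runtime without any caller changes.
fn run(backend: &dyn Backend, prompt: &str) -> String {
    backend.infer(prompt)
}

fn main() {
    let backends: Vec<Box<dyn Backend>> =
        vec![Box::new(GgmlBackend), Box::new(CandleBackend)];
    for b in &backends {
        println!("{}: {}", b.name(), run(b.as_ref(), "hello"));
    }
}
```

The design choice here is dynamic dispatch (`dyn Backend`) so the backend can be chosen from configuration at runtime; a generics-based design would also work if the backend is known at compile time.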
-
Development of `llm` hasn't gone as quickly as I would have liked. I have been very busy with work over the last few months, and keeping up with the pace of development in the space has been quite difficult. There is no better demonstration of this than the GGUF implementation work: after writing up the spec and getting everyone to agree on it, I haven't had much energy to work on actually implementing it (#412), especially in accounting for how the tokenizer has changed and handling the other model architectures. I've been fixing things up here and there, but it's still a while off.
All of that would be fine - we could always catch up - if it weren’t for the existence of Candle, or more specifically, candle-transformers. Like its Python namesake, candle-transformers contains implementations of common models, including the majority of, if not all of, the models we support.
My plan was always to support Candle as an additional, or replacement, backend. However, I did not account for Candle itself implementing these models, and adding support for both GGUF/GGML models and quantisation, in such a short timeframe, allowing it to more or less cover the same territory as we do. Additionally, it has a full-time developer who can dedicate their time to progressing Candle much more expediently than any one of us can progress `llm`.

Given that, I'm wondering if we still need to be around. I'm not convinced that a library that is neither bleeding-edge (llama.cpp) nor ecosystem-native (Candle) can carve out its own unique place in the Rust ML ecosystem.
I’ve heard some suggestions that we could offer a more ergonomic interface on top of CT, but I suspect that effort would be better spent on making CT more ergonomic in itself.
I’ve created a poll with a few options, but I’d love to hear any detailed thoughts people have, reports from production use of the library, or reasons why `llm` should stick around.

In any case, I appreciate all of our users - I love seeing what people have done with the library! - and hope that we have been of use to you. Thanks for sticking around!
35 votes