
Request for Assistance in Training LLMs Using RAG for Educational Chatbots #4287

Answered by rysweet
Alisheikhalii asked this question in Q&A

Hi @Alisheikhalii,
Thanks for your post!

In the end, there are multiple concerns being discussed here. First, getting the model to give more accurate answers by augmenting its prompts with a dataset (RAG) is about grounding the model, which is different from training a model (building the model weights through a computational process) or fine-tuning the model (adjusting the model weights based on your data). Most likely you can accomplish what you want with advanced LLMs such as GPT-4o and RAG, without specific fine-tuning. For smaller models that may not be true. Also, language models are known for not being great at math; they are, after all, stochastic rather than deterministic. For logic …
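To illustrate the grounding point, here is a minimal sketch of RAG-style prompt augmentation. It is not tied to any particular framework: the `retrieve` helper, the sample documents, and the prompt template are illustrative assumptions (a naive keyword match standing in for an embedding-based vector store), and the model weights are never touched.

```python
# Minimal RAG sketch: retrieve relevant passages from a local dataset and
# prepend them to the prompt. Names and data here are illustrative only.
from openai import OpenAI

documents = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The quadratic formula solves ax^2 + bx + c = 0 for x.",
    "Newton's second law states F = m * a.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring; a real system would use embeddings
    # and a vector store instead.
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str) -> str:
    # Ground the model by injecting retrieved context into the prompt.
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer the student's question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("What does Newton's second law say?"))
```

The key point of the sketch is that nothing about the model changes; the dataset only shapes the prompt, which is why this is grounding rather than training or fine-tuning.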

Answer selected by Alisheikhalii