Enhancement: Improve documentation regarding audio ingestion pipeline to inform the user when to use VAD #431
Currently checking if
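As background for the documentation request: whether VAD (voice activity detection) is worth enabling depends mostly on how much of the input is non-speech. A rough, hedged way to gauge that before ingestion is ffmpeg's silencedetect filter; the noise floor and minimum-duration values below are purely illustrative:

```bash
# Report silent stretches in the input so you can judge whether VAD pre-filtering
# is likely to help (lots of silence or music -> VAD pays off; dense continuous
# speech -> it adds little). -35dB and 1s are illustrative values, not recommendations.
ffmpeg -i input.wav -af silencedetect=noise=-35dB:d=1 -f null - 2>&1 | grep silence_
```

If a large fraction of the runtime shows up as silence, enabling VAD in the ingestion pipeline should cut transcription time and reduce spurious output on silent stretches.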
Thanks for the suggestions/feedback; responding in order:
@rmusser01 thanks for the reply. The first numbered list was just me reminding myself how to do the install with Docker, but I would love for you to address the bullet points instead (hopefully along with more documentation and guides). Right now I would especially focus on how to get the Docker setup to recognize that I need the Phi-3 SLM (and maybe an embedding model) from Ollama to do summarization. The pipeline definitely needs more docs, and probably some rework, since people might not have been explicit about their use case or video genre type... should I explain more?
Edit: Ah, I understand what you meant.
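On the Ollama part of the question above, here is a minimal sketch of running Ollama as its own container and pulling Phi-3 plus an embedding model; the container and volume names are arbitrary, and how TL;DW is then pointed at the endpoint depends on its config.txt, which I have not verified:

```bash
# Run Ollama as a sidecar container; 11434 is Ollama's default API port
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# Pull the models inside that container
docker exec -it ollama ollama pull phi3
docker exec -it ollama ollama pull nomic-embed-text   # optional embedding model
```

If both containers share a user-defined Docker network, the API endpoint would typically be http://ollama:11434; from the host it is http://localhost:11434.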
@rmusser01 using Docker and Ollama on their own is fairly easy; however, I am running into a strange problem with setting which Ollama models TL;DW should use, since there is no such setting in the panel, and the recommendation was to edit the config.txt file (which is packed inside the Docker container's file system and is slightly less streamlined to work with).
FYI you can edit the config file from the web UI |
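If the web UI route is not convenient, one hedged workaround for a config.txt baked into the container image is to copy it out, edit it, and copy it back; the container name tldw and the path /app/config.txt are assumptions, so adjust them to the actual image layout:

```bash
# Copy the config out of the running container, edit it locally, then copy it back
docker cp tldw:/app/config.txt ./config.txt
"$EDITOR" ./config.txt                         # set the Ollama endpoint/model here
docker cp ./config.txt tldw:/app/config.txt
docker restart tldw                            # restart so the app re-reads the config
```

A bind mount at docker run time (e.g. -v "$(pwd)/config.txt:/app/config.txt") avoids the copy step entirely and keeps the file editable from the host.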
(Just a bit of a personal note that could be included in the ReadMe.) Rename the file to Dockerfile, i.e. remove the .txt extension.
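For the Dockerfile note above, the rename plus a local build would look roughly like this; the .txt-suffixed filename and the tldw image tag are assumptions:

```bash
# Drop the .txt suffix so `docker build` finds the file under its default name
mv Dockerfile.txt Dockerfile

# Build a local image from it (the tag name is arbitrary)
docker build -t tldw .
```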