Couldn't run evaluate.py
#5
Comments
Thank you for your interest in our work. Unfortunately, the model is not yet integrated with Hugging Face in a way that lets you load it simply by name; we will support this soon. For now, you need to download the weights from Hugging Face by git-cloning the model repo (i.e., downloading the weights locally) and then specify that local model path in the yaml file.
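As a rough sketch of the workflow described above: clone the weights locally, then point the yaml at the downloaded .pth file. The repo name mlpc-lab/BLIVA_Vicuna and the filename bliva_vicuna7b.pth come from this thread; the exact yaml keys (model/arch/finetuned) are an assumption and should be matched against the project's own bliva_vicuna7b.yaml.

```shell
# 1) Download the weights locally (requires git-lfs; shown as comments
#    because it needs network access):
#    git lfs install
#    git clone https://huggingface.co/mlpc-lab/BLIVA_Vicuna
#
# 2) Point the model config at the downloaded checkpoint. The yaml keys
#    below are an assumption; check them against bliva_vicuna7b.yaml.
cat > /tmp/bliva_vicuna7b_demo.yaml <<'EOF'
model:
  arch: bliva_vicuna
  finetuned: /path/to/BLIVA_Vicuna/bliva_vicuna7b.pth
EOF

# Sanity-check that the config now references the local .pth file
grep -q 'bliva_vicuna7b.pth' /tmp/bliva_vicuna7b_demo.yaml && echo "yaml points at local .pth"
```

The key point from the maintainer's comment is that the path must go all the way down to the .pth file itself, not just to the cloned directory.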
I have the same problem.
Thank you for your interest in our work. Unfortunately, wget is not the proper way to download from a Hugging Face data repo; the proper way is described in #19 (comment). Also make sure the path to your weight file goes all the way to the .pth file itself, i.e. ends with bliva_vicuna7b.pth.
I tried to run the evaluation with the following example, and got this error:

BLIVA Vicuna is defined in bliva_vicuna7b.yaml. However, the llm_model checkpoint is not defined. What model do you recommend using in each case? I tried llm_model with mlpc-lab/BLIVA_Vicuna, but it still doesn't work. Any suggestions?