inference examples #1
Hi @SkalskiP, Thank you for your interest in our work. The run_llava.py script from the official LLaVA repository, along with the edited files we provide in our repository, can be used for inference. However, we are planning to release some straightforward inference scripts soon. Stay tuned!
Hi @SkalskiP, Thank you for your patience. A Google Colab demo is now available, check it out!
Hi @SkalskiP, We have just released the online demo of both Phi-3-V and LLaMA-3-V. Check it out!
Hi @mmaaz60, thanks for your excellent work. Could you provide an inference script based on checkpoints or weights trained with LLaMA3-V_finetune_lora.sh?
Hi @At1a8, We appreciate your interest in our work. Please note that we also provide the merged weights obtained by merging the LoRA weights with the base LLM. For example, for LLaMA-3 the merged LoRA weights are available at LLaVA-Meta-Llama-3-8B-Instruct, and the weights obtained with full fine-tuning are available at LLaVA-Meta-Llama-3-8B-Instruct-FT. We notice that, for LLaMA-3-V, the fully fine-tuned model works better than the LoRA fine-tuned model. The same inference pipeline as in the Google Colab can be used for the LLaMA-3-V models as well; you just have to copy the LLaMA-3-V files instead of the Phi-3-V ones and download the LLaMA-3-V model. We hope this helps. Please let us know if you have any questions. Thank you!
Thanks for your reply. We trained our models on a custom dataset, and we want to merge the LLaMA-3 base weights with the LoRA weights trained using your code. How can we do that? Could you please give a code example? Thanks so much.
Hi @At1a8, Thanks for the clarification. You can use the following script to merge the LoRA weights after training. I hope it helps. Please let me know if you face any issues. Good luck!
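(The merge step discussed here boils down to the standard LoRA algebra: fold the low-rank update into the base weight, `W' = W + (alpha / r) * B @ A`. Below is a minimal NumPy sketch of that arithmetic; the function name `merge_lora` and the shapes are illustrative, not the repository's actual script. In practice with the `peft` library, `PeftModel`'s `merge_and_unload()` performs this fold for every adapted layer.)

```python
import numpy as np

def merge_lora(W, A, B, alpha):
    """Fold a LoRA update into a base weight matrix.

    W: (out, in) base weight; B: (out, r); A: (r, in); alpha: LoRA scaling.
    Returns the merged weight W' = W + (alpha / r) * B @ A.
    """
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 8))
    A = rng.standard_normal((4, 8))   # rank r = 4
    B = np.zeros((8, 4))              # B initialized to zero, as in LoRA training
    merged = merge_lora(W, A, B, alpha=8.0)
    assert np.allclose(merged, W)     # a zero update leaves the base weights unchanged
```

After merging, the adapter matrices can be discarded and the model served exactly like the base architecture, with no extra inference cost.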
Hi @At1a8, We have just added the merge_lora_weights.py script, which will be helpful for merging the LoRA weights. Please let us know if you have any questions. Good luck!
@mmaaz60 thanks a lot! I'll make sure to play with it ;)
We trained with this script to get checkpoints,
used the merge script mentioned above, and got the following logs.
We still cannot access meta-llama/Meta-Llama-3-8B-Instruct, so we use Undi95/Meta-Llama-3-8B-Instruct-hf instead, and we encountered this warning:
Why does this warning appear, and is there a way to resolve it?
Hi @At1a8, This warning is normal. During merging, we first load the base LLM checkpoint into our Visual-LLM class, and that checkpoint does not contain the projector weights. Later, we load the LoRA and additional weights, which do contain the projector weights. In summary, the warning is expected; you can ignore it and proceed. Good luck!
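(To see why the missing-keys warning is harmless, here is a hypothetical, simplified sketch, with plain dicts standing in for state dicts and a made-up `load_state` helper, of the two-stage loading described above: the first load legitimately lacks the projector keys, and the second load fills them in.)

```python
def load_state(model, state, strict=True):
    """Copy checkpoint entries into the model dict; report keys the checkpoint lacks."""
    missing = [k for k in model if k not in state]
    if strict and missing:
        raise KeyError(f"missing keys: {missing}")
    model.update({k: v for k, v in state.items() if k in model})
    return missing

# Visual-LLM parameters: base LLM weights plus the multimodal projector.
model = {"llm.layer0.weight": None, "llm.layer1.weight": None,
         "mm_projector.weight": None}

# Stage 1: the base LLM checkpoint has no projector -> "missing keys" warning.
base_ckpt = {"llm.layer0.weight": 1.0, "llm.layer1.weight": 2.0}
missing = load_state(model, base_ckpt, strict=False)
print("missing after base load:", missing)   # ['mm_projector.weight']

# Stage 2: the LoRA / additional weights include the projector, so nothing stays unset.
extra_ckpt = {"mm_projector.weight": 3.0}
load_state(model, extra_ckpt, strict=False)
assert all(v is not None for v in model.values())
```

This mirrors PyTorch's `load_state_dict(..., strict=False)`, which likewise returns the missing keys instead of raising, which is exactly why the warning can be ignored when a later load supplies those parameters.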
Hi 👋🏻 Do you have any inference examples that I could use?