There is a new adapter called LLaMA-Adapter 🔥, a lightweight adaptation method for fine-tuning instruction-following LLaMA models, using the 52K instruction-following data provided by Stanford Alpaca.
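The core trick in LLaMA-Adapter is zero-initialized gated attention: learnable prompt tokens are injected into the upper transformer layers, but their contribution is scaled by a gate that starts at zero, so training begins from the frozen model's exact behavior. A minimal NumPy sketch of that gating idea (an illustrative assumption, not the repo's actual implementation):

```python
import numpy as np

def gated_adapter_output(base_attn_out, adapter_attn_out, gate):
    """Blend the frozen attention branch with the adapter branch.

    `gate` is a learnable scalar (per head in the paper); tanh keeps
    the adapter contribution bounded, and gate == 0 disables it.
    This is a conceptual sketch, not code from the LLaMA-Adapter repo.
    """
    return base_attn_out + np.tanh(gate) * adapter_attn_out

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 8))     # output of frozen self-attention
adapter = rng.standard_normal((4, 8))  # output attending to prompt tokens

# At initialization the gate is zero, so the adapter has no effect
# and the model is exactly the pretrained LLaMA.
init = gated_adapter_output(base, adapter, gate=0.0)
assert np.allclose(init, base)

# As the gate trains away from zero, adapter signal is blended in.
later = gated_adapter_output(base, adapter, gate=0.5)
assert not np.allclose(later, base)
```

Because only the prompt tokens and gates are trained, the number of learnable parameters stays tiny compared to full fine-tuning.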
Open source status
The model implementation is available in the GitHub repo.
The model weights are partially available: variants of LLaMA are available, e.g. gpt4all and GPTQ-for-LLaMa. The weights for LLaMA-Adapter aren't available.
Any updates related to this enhancement? I think LLaMA-Adapter is really influential (more than 5k stars), and this enhancement would be very useful. 😃