
AdapterFusion requires large batch_size? #597

Closed
dyxohjl666 opened this issue Oct 30, 2023 · 0 comments
Labels
question Further information is requested

Comments

@dyxohjl666

When I train AdapterFusion with the default configuration on a summarization task, the training loss suddenly increases after the first epoch and never converges.
Due to resource limitations, I could originally only train with a batch_size of 1. I then tried different hyperparameters and eventually found that training behaves normally only when batch_size >= 16. I also tested batch_size = 8, which was still not good enough. (I didn't actually increase the per-device batch_size; I applied gradient accumulation to reach the larger effective batch size.)

Just recording the details here for anyone who runs into the same confusion when training AdapterFusion!
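For reference, a minimal sketch of the workaround described above, using Hugging Face `TrainingArguments` with gradient accumulation. The output path is a placeholder, and only the two batch-related settings reflect the original report:

```python
# Minimal sketch: reach an effective batch size of 16 when only a
# per-device batch_size of 1 fits in GPU memory. Values other than
# the batch settings are placeholder assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./fusion_out",        # placeholder output path
    per_device_train_batch_size=1,    # all that fits in memory
    gradient_accumulation_steps=16,   # 1 x 16 = effective batch size of 16
)
```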

dyxohjl666 added the question label Oct 30, 2023