Replies: 1 comment
-
Yes, you're right, there is no way to configure this. I agree that the documentation is confusing in this regard and should be updated.
Could you clarify what you mean by a multipart input?
This is the intended usage. As a user, you should not have to create the PromptEncoder yourself; passing the PromptEncoderConfig to get_peft_model takes care of that.
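For illustration, here is roughly what that intended path looks like; the base model and hyperparameters below are just placeholders, not a recommendation:

```python
from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, TaskType, get_peft_model

# Placeholder base model; any sequence-classification checkpoint works the same way.
base_model = AutoModelForSequenceClassification.from_pretrained("roberta-base")

peft_config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    encoder_hidden_size=128,
)

# get_peft_model builds the PromptEncoder internally from this config,
# so there is no need to instantiate PromptEncoder yourself.
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()
```

If you want to inspect what was built, the wrapped model keeps the encoder around (it should be reachable via `model.prompt_encoder`, though double-check that attribute on your PEFT version).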
-
The reference https://huggingface.co/docs/peft/v0.11.0/package_reference/p_tuning says "prompt tokens can be added anywhere in the input sequence, and p-tuning also introduces anchor tokens for improving performance", but I don't see any options in the configuration to control this. How can I best handle a multipart input with P-tuning?
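For reference, this is the full set of options I can find on PromptEncoderConfig in v0.11.0 (field names taken from the docs, so treat this as my reading rather than anything authoritative); none of them appears to control where the prompt tokens go or to add anchor tokens:

```python
from peft import PromptEncoderConfig

# Everything configurable here concerns the reparameterization encoder itself,
# not the placement of the virtual tokens or any anchor tokens.
config = PromptEncoderConfig(
    task_type="SEQ_CLS",
    num_virtual_tokens=20,
    encoder_reparameterization_type="MLP",  # or "LSTM"
    encoder_hidden_size=128,
    encoder_num_layers=2,
    encoder_dropout=0.0,
)
print(config)
```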
It's also confusing that the reference page above shows a PromptEncoder class that uses the PromptEncoderConfig, but the task example https://huggingface.co/docs/peft/main/en/task_guides/ptuning-seq-classification just passes the PromptEncoderConfig directly to get_peft_model. Is there a way of using PromptEncoder for more-flexible handling?
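For context, this is what I would naively expect direct use of PromptEncoder to look like, based on that reference page; I had to guess at fields such as token_dim and num_transformer_submodules, which get_peft_model presumably fills in from the base model, and that is part of why I'm asking:

```python
import torch
from peft import PromptEncoder, PromptEncoderConfig

# Guessed values for fields get_peft_model would normally derive from the base model:
# token_dim = the model's hidden size, num_transformer_submodules = 1.
config = PromptEncoderConfig(
    num_virtual_tokens=20,
    encoder_hidden_size=128,
    token_dim=768,
    num_transformer_submodules=1,
)
encoder = PromptEncoder(config)

# The encoder maps virtual-token indices to soft prompt embeddings.
indices = torch.arange(config.num_virtual_tokens * config.num_transformer_submodules).unsqueeze(0)
prompts = encoder(indices)  # shape: (1, 20, 768)
```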