-
I collect information about "promptless training" and "prompt dropping" here: https://civitai.com/articles/2078#heading-3617 -> Q: Why does a ControlNet need captions, and why would you drop them? You might also consider automated caption generation, e.g. with BLIP; there may even be variants specialized for characters: https://huggingface.co/docs/transformers/model_doc/blip-2
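The dropping itself is trivial to implement. A minimal sketch of caption dropout during dataset loading (the 0.5 drop probability and the function name are illustrative choices, not values from the linked article):

```python
import random
from typing import Optional

def maybe_drop_caption(caption: str, drop_prob: float = 0.5,
                       rng: Optional[random.Random] = None) -> str:
    """With probability `drop_prob`, replace the caption with an empty string.

    The idea behind "prompt dropping": when the text prompt is sometimes
    blank during training, the model is pushed to rely on the control
    image (canny edges, pose skeleton) rather than the caption.
    """
    r = rng.random() if rng is not None else random.random()
    return "" if r < drop_prob else caption

# Deterministic demo with a seeded RNG
rng = random.Random(0)
captions = ["a girl with blue hair", "a knight in armor",
            "a cat on a roof", "a red car"]
out = [maybe_drop_caption(c, drop_prob=0.5, rng=rng) for c in captions]
```

In a real training loop you would apply this per sample, per epoch, so each image is seen both with and without its caption over the course of training.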
-
I have completed the initial training of the SD2.1 canny model and am now working on training the pose model.
I currently have an accurately annotated training set plus a large amount of unannotated data. My plan is to auto-label only the full-body images from the unlabelled data, and I am wondering whether this has serious implications for practical use after training. Beyond that, the auto-labeling seems less accurate, sometimes even a mess, and I was wondering whether such cases make up a high percentage of your dataset?
Attached examples (images): one auto-labeled pose that is less accurate, and one that is a mess.
PS: this pose estimator is for illustrated characters.
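One common workaround when the auto-labels are unreliable: if the pose estimator reports per-keypoint confidence scores, discard samples where too few joints are detected confidently, rather than training on messy labels. A minimal sketch (the `keep_pose` helper, the 17-joint COCO-style format, and the thresholds are all hypothetical, not taken from any particular estimator):

```python
def keep_pose(keypoints, min_conf=0.3, min_visible=12):
    """Decide whether an auto-labeled pose is good enough to train on.

    keypoints: list of (x, y, confidence) tuples, e.g. 17 COCO-style joints.
    A sample is kept only if at least `min_visible` joints are detected
    with confidence >= `min_conf`. Thresholds here are illustrative.
    """
    visible = sum(1 for (_x, _y, c) in keypoints if c >= min_conf)
    return visible >= min_visible

# Demo with three synthetic annotations
good    = [(i * 10.0, i * 5.0, 0.9) for i in range(17)]   # confident full body
bad     = [(i * 10.0, i * 5.0, 0.1) for i in range(17)]   # low-confidence mess
partial = good[:10] + [(0.0, 0.0, 0.0)] * 7               # upper body only

kept = [keep_pose(k) for k in (good, bad, partial)]
```

Filtering like this shrinks the unannotated pool but usually hurts less than letting garbled skeletons into the training set; the accurately annotated subset then anchors the label distribution.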