-
Hi @philippbb,
-
Hi @Svito-zar, thank you very much for your answer. My dataset is basically 12s long. During this time (around 2s to 5s), the actor says "top" and reaches with his right hand from an almost A-pose to point at his chest, indicating something on his clothing. After training on this dataset and then running the same file through the gesture predictor in the demo folder, I expected the skeleton to perform a similar pointing gesture at some point, but all I get is the movement below, and this is one of the better results I got:

temp.mp4

The dataset is in Japanese, and I used the Japanese BERT model 'cl-tohoku/bert-base-japanese'. Since the initial version of Gesticulator was in Japanese, I assumed this wouldn't be the source of the problem. Moreover, in the original BVH the left hand is constantly holding a microphone near the head.
-
Hmm ... training on 12s of data is quite extreme. Normally a model should be able to overfit to a small dataset, but 12s is not small, it is tiny :) Did you manage to reproduce the results with the original dataset, the Trinity College Speech-Gesture dataset?
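A quick way to act on this advice is an overfitting sanity check: before suspecting the gesture model or the data, confirm that your training loop can drive the loss close to zero on a handful of samples. A minimal sketch of the idea with a toy linear regressor in NumPy (all names, shapes, and hyperparameters here are illustrative, not Gesticulator's actual pipeline):

```python
# Hypothetical overfitting sanity check: a model trained on a tiny dataset
# should be able to memorize it, i.e. drive training loss to ~0.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))           # 8 "frames" of 4-dim speech features
true_W = rng.normal(size=(4, 2))
Y = X @ true_W                        # 2-dim "gesture" targets

W = np.zeros((4, 2))                  # parameters of the toy model
lr = 0.1
for step in range(2000):
    pred = X @ W
    grad = X.T @ (pred - Y) / len(X)  # gradient of the mean squared error
    W -= lr * grad                    # plain gradient descent step

loss = float(np.mean((X @ W - Y) ** 2))
print(f"final training loss: {loss:.2e}")
```

If the loss plateaus well above zero even on data this small, the problem is in the training setup (features, learning rate, loss wiring) rather than in the amount of data.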
-
Would it be possible to receive some advice on how to train/predict with Gesticulator on a smaller dataset?
I just tried with one bvh/audio/json set of around 15s, but the prediction step via the model in the demo folder mostly doesn't give me anything close to what I expected.
For example, if I use the same audio and text I used for training, the skeleton moves its arms for only the first 2s or so, then stands still.
Thank you very much in advance.