How did you get position? #4

Open
jtragtenberg opened this issue Jun 15, 2015 · 4 comments

Comments

@jtragtenberg
Collaborator

I am reading your code and still couldn't figure out how you managed to get the actors' position on stage using only joint angles...
Do you use any other external sensors, like cameras or a Kinect?

Thanks!

@itotaka
Collaborator

itotaka commented Jun 16, 2015

No, sorry. Currently MOTIONER doesn't capture the absolute positions of the joints or the body.
We once tried to combine a camera or Kinects with it, but that still needs some development.

@jtragtenberg
Collaborator Author

Then how did you make the videos you published where the avatars move around in space?
That happens in all of your videos, and it is only possible if position is captured. If you had only relative positions, the avatar would move as if it were on a slippery floor, no?
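
(For readers of this thread, a minimal forward-kinematics sketch of the point being made here; it is illustrative only, not code from MOTIONER or the RAM Dance Toolkit. If all you have are per-joint orientations, every position is computed by chaining bone vectors from a root that never moves, so the character cannot translate across the stage. The Vec3/Quat/Joint types and the assumption that each IMU reports a world-space orientation are assumptions of this sketch.)

```cpp
// Sketch only: how joint positions fall out of orientation data alone.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // unit quaternion

// Rotate a vector by a unit quaternion: v' = v + 2w(qv x v) + 2 qv x (qv x v).
Vec3 rotate(const Quat& q, const Vec3& v)
{
    Vec3 t { 2 * (q.y * v.z - q.z * v.y),   // t = 2 * (qv x v)
             2 * (q.z * v.x - q.x * v.z),
             2 * (q.x * v.y - q.y * v.x) };
    return { v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
             v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
             v.z + q.w * t.z + (q.x * t.y - q.y * t.x) };
}

struct Joint {
    int  parent;      // index of the parent joint; joint 0 is the root (parent = -1)
    Vec3 boneOffset;  // bone vector from the parent to this joint in the reference pose
    Quat rotation;    // world-space orientation of the segment leading to this joint
};

// Joints must be ordered parent-before-child. The root is pinned at the
// origin, so however the dancer walks, position[0] never changes: that is
// the "slippery floor" effect.
std::vector<Vec3> forwardKinematics(const std::vector<Joint>& skeleton)
{
    std::vector<Vec3> position(skeleton.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 1; i < skeleton.size(); ++i) {
        const Joint& j = skeleton[i];
        const Vec3&  p = position[j.parent];
        Vec3 bone = rotate(j.rotation, j.boneOffset);
        position[i] = { p.x + bone.x, p.y + bone.y, p.z + bone.z };
    }
    return position;
}
```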

@itotaka
Collaborator

itotaka commented Jun 16, 2015

There are two possibilities.

  1. We borrowed an optical motion capture system for experiments. Some of the videos and sample data come from that.
  2. The MOTIONER application roughly simulates walking by fixing the lowest joint position. The data from the MOTIONER sensors itself is, yes, slippery on the floor.

Hmm, I'm sorry for the confusion; I'll add a description about this.
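
(To make the "fixing the lowest joint position" idea concrete, here is a rough sketch of one way it can be done. It is an illustration under assumptions of this sketch only, floor at y = 0 and joint positions already produced by forward kinematics with the root at the origin, and is not MOTIONER's actual implementation.)

```cpp
// Sketch only: pin whichever joint is lowest each frame so a rotation-only
// skeleton appears to walk instead of sliding in place.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };   // same minimal vector type as in the sketch above

// rel:       this frame's joint positions from forward kinematics (root at the origin)
// prevWorld: last frame's world-space joint positions (empty on the first frame)
// Returns this frame's world-space joint positions and updates prevWorld.
std::vector<Vec3> fixLowestJoint(const std::vector<Vec3>& rel,
                                 std::vector<Vec3>& prevWorld)
{
    if (rel.empty()) return {};

    // Treat the lowest joint of the current pose as the stance foot.
    std::size_t k = 0;
    for (std::size_t i = 1; i < rel.size(); ++i)
        if (rel[i].y < rel[k].y) k = i;

    // Keep that joint where it was last frame (or drop it onto the floor on
    // frame one), and snap it to the floor plane y = 0.
    Vec3 anchor = prevWorld.empty() ? Vec3{rel[k].x, 0.0f, rel[k].z} : prevWorld[k];
    Vec3 offset { anchor.x - rel[k].x, -rel[k].y, anchor.z - rel[k].z };

    // Translating the whole skeleton by that offset is what produces the
    // apparent locomotion.
    std::vector<Vec3> world(rel.size());
    for (std::size_t i = 0; i < rel.size(); ++i)
        world[i] = { rel[i].x + offset.x, rel[i].y + offset.y, rel[i].z + offset.z };

    prevWorld = world;
    return world;
}
```

Pinning the stance joint to its previous world position, rather than to one fixed point, means that when the lowest joint switches from one foot to the other the new stance foot inherits its last world position, so translation accumulates step by step.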

@jtragtenberg
Collaborator Author

Great, that was what I was looking for.
I believe videos like https://vimeo.com/61942488 show the dancers, wearing MOTIONERs, reacting to each other, so it's probably the second alternative.
Is this lowest-joint-position fixing done in the RAM Dance Toolkit? Is the OSC data sent from the mbed only the quaternions of each joint?

Also, I sent you an email at taka@ycam.jp; is that your email?
For my master's I'm researching how to create music from dance, and I plan to use MOTIONER to generate music. I would like to know more about the RAM project and about YCAM for my research.
Could I send you some questions about the process?
And could you pass me the emails of the other people involved in this project so I could interview them too?

My email is tragtenberg@gmail.com.

Thank you!
