
gen_joint_data.py #6

Open
itisthoughttobemaeri opened this issue Feb 5, 2023 · 1 comment

@itisthoughttobemaeri

Hello,

Thanks for publishing the code for your research. While trying to reproduce the results, the command `python3 gen_joint_data.py` gets stuck at element 1370/35208 with the following error:

```
python3 gen_joint_data.py
xsub train
  4%|██▍       | 1370/35208 [00:29<11:56, 47.21it/s]
Traceback (most recent call last):
  File "/Users/mari/Desktop/ST-GCN-master/data_gen/gen_joint_data.py", line 164, in <module>
    gendata(
  File "/Users/mari/Desktop/ST-GCN-master/data_gen/gen_joint_data.py", line 140, in gendata
    data = read_xyz(os.path.join(data_path, s), max_body=max_body_kinect, num_joint=num_joint)
  File "/Users/mari/Desktop/ST-GCN-master/data_gen/gen_joint_data.py", line 77, in read_xyz
    seq_info = read_skeleton_filter(file)
  File "/Users/mari/Desktop/ST-GCN-master/data_gen/gen_joint_data.py", line 28, in read_skeleton_filter
    skeleton_sequence['numFrame'] = int(f.readline())
ValueError: invalid literal for int() with base 10: '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
```

I was wondering if this had something to do with the samples_missing_skeletons.txt file.
I got the dataset from the ROSE lab (https://rose1.ntu.edu.sg/dataset/actionRecognition/).

Thanks in advance

@itskalvik (Owner)

Perhaps it's a bad data file; I didn't add enough sanity checks in my code to handle such cases. Try deleting that one file or using a different subset of the dataset. Hope that helps :)
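For anyone hitting the same error, one way to follow that advice is to pre-scan the dataset and set aside any `.skeleton` file whose first line does not parse as an integer frame count (the traceback shows `read_skeleton_filter` expects one, and the bad file was padded with NUL bytes). This is a hypothetical sketch, not part of the repo; the names `is_valid_skeleton` and `filter_samples` are illustrative:

```python
import os

def is_valid_skeleton(path):
    """Return True if the file's first line parses as an integer frame count."""
    try:
        with open(path) as f:
            int(f.readline())  # mirrors int(f.readline()) in read_skeleton_filter
        return True
    except (ValueError, OSError):
        return False

def filter_samples(data_path, filenames):
    """Split filenames into usable samples and corrupted ones to skip or report."""
    good, bad = [], []
    for name in filenames:
        target = good if is_valid_skeleton(os.path.join(data_path, name)) else bad
        target.append(name)
    return good, bad
```

Running this over the sample list before calling `gendata` would let you log the corrupted files (and cross-check them against `samples_missing_skeletons.txt`) instead of crashing mid-run.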
