
Reproduction of metrics for the FOR-Instance dataset reported in the paper #10

zqalex opened this issue Apr 11, 2024 · 5 comments

zqalex commented Apr 11, 2024

Thanks for your great work! However, I have a few questions that confuse me.

First, I directly used the command from the README: `python train.py task=panoptic data=panoptic/treeins_rad8 models=panoptic/area4_ablation_3heads_5 model_name=PointGroup-PAPER training=treeins job_name=treeins_my_first_run`. However, in train.log the test mIoU at epoch 149 is only 90.44 and the test F1 score is 0.72, while in eval.log the test mIoU is 90.41 and the test F1 score is 0.74.

I used the default parameters provided in the GitHub repository, and the data is the same, so I am curious how the parameters were set in your paper to reach an mIoU of 97.2% on the FOR-Instance dataset.

Second, when I tried to run eval.py with the PointGroup-PAPER.pt checkpoint you provided, the following error occurred:

File "eval.py", line 13, in main
trainer = Trainer(cfg)
File "/XXX/torch_points3d/trainer.py", line 48, in init
self._initialize_trainer()
File "/XXX/torch_points3d/trainer.py", line 90, in _initialize_trainer
self._dataset: BaseDataset = instantiate_dataset(self._checkpoint.data_config)
File "/XXX/torch_points3d/datasets/dataset_factory.py", line 46, in instantiate_dataset
dataset = dataset_cls(dataset_config)
File "/XXX/torch_points3d/datasets/panoptic/treeins.py", line 629, in init
self.test_dataset = dataset_cls(
File "/XXX/torch_points3d/datasets/segmentation/treeins.py", line 495, in init
super().init(root, grid_size, *args, **kwargs)
File "XXX/torch_points3d/datasets/segmentation/treeins.py", line 189, in init
super(TreeinsOriginalFused, self).init(root, transform, pre_transform, pre_filter)
File "XXX/lib/python3.8/site-packages/torch_geometric/data/in_memory_dataset.py", line 60, in init
super().init(root, transform, pre_transform, pre_filter)
File "XXX/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 83, in init
self._download()
File "XXX/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 136, in _download
if files_exist(self.raw_paths): # pragma: no cover
File "XXX/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 125, in raw_paths
files = to_list(self.raw_file_names)
File "XXX/torch_points3d/datasets/segmentation/treeins.py", line 233, in raw_file_names
for region in self.forest_regions:
TypeError: 'NoneType' object is not iterable

But as I mentioned above, my data was preprocessed by your code, and the settings are the same as those described in your GitHub repository. Could this issue be due to some difference in the dataset?
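
For reference, the error itself only says that `self.forest_regions` ended up as None before `raw_file_names` iterated over it. A minimal sketch of the failure mode and the usual guard (illustrative only, not the repository's actual code):

```python
# Minimal sketch of the failure mode, not the repository's actual code:
# iterating over an attribute that stayed None raises exactly this error.
forest_regions = None  # e.g. the config never set it, so it defaulted to None

try:
    for region in forest_regions:
        pass
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable

# A common guard treats None like "no regions configured":
for region in forest_regions or []:
    pass  # the loop body is simply skipped when forest_regions is None
```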

Third, after running eval.py, the result files only include Instance_results_withColor_X.ply and vote1regular.ply_X.ply. However, these files cannot be used as inputs to evaluation_stats_FOR.py because they lack the 'pre' and 'gt' attributes. So I am wondering whether some processing step is needed between the output of eval.py and evaluation_stats_FOR.py. If so, could you please describe it?
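
This is how I checked which per-vertex attributes the output files actually carry (a minimal sketch assuming the plyfile package; the file name below is just one instance of the Instance_results_withColor_X.ply pattern):

```python
# Inspect the per-vertex attributes of an output PLY file.
# Assumes the plyfile package (pip install plyfile); the file name is
# one example of the Instance_results_withColor_X.ply pattern.
from plyfile import PlyData

ply = PlyData.read("Instance_results_withColor_0.ply")
print(ply["vertex"].data.dtype.names)
# evaluation_stats_FOR.py needs 'pre' and 'gt' to appear in this tuple.
```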

Looking forward to your reply.

bxiang233 (Collaborator) commented

Hi,

Did you manage to download the FOR-instance data? Update: the FOR-instance dataset is now open. You can download it from this link: https://polybox.ethz.ch/index.php/s/wVBlHgH308GRr1c and place it in the correct location as described here: [GitHub - Data Folder Structure](https://github.com/prs-eth/PanopticSegForLargeScalePointCloud?tab=readme-ov-file#data-folder-structure-1).

Regarding the parameters, they are detailed in our paper. I will check again to see if the default parameters provided in the GitHub repository remain unchanged. If you have time, it would be great if you could also verify this.

For the third issue, I have updated the file "torch_points3d/metrics/panoptic_tracker_pointgroup_treeins.py". It should now generate the files needed for evaluation_stats_FOR.py. Please give it a try and see if it helps.

Additionally, you might find some updates in this new Git link (related to our new paper) for your reference: https://github.com/bxiang233/ForAINet.

Hope this helps!

zqalex (Author) commented Apr 12, 2024


The above issue has been resolved. Although the mIoU and F1 in train.log and eval.log did not reach the values reported in the paper, after successfully running evaluation_stats_FOR.py the overall metrics reached the level discussed in the paper. Thank you for your timely response.

Additionally, regarding the recent modification you made to torch_points3d/metrics/panoptic_tracker_pointgroup_treeins.py, I believe there is a bug on line 665, which should be:

```python
self.dataset.to_eval_ply(
    test_area_i.pos,
    full_ins_pred.numpy(),        # [-1, ...]
    test_area_i.instance_labels,  # [0, ...]
    "Instance_Results_forEval{}.ply".format(i),
)
```
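
For anyone reading along: this call hands point positions, predicted instance ids, and ground-truth instance labels to a PLY writer. Below is a hedged sketch of what such a writer plausibly produces, since evaluation_stats_FOR.py reads 'pre' and 'gt' fields back in; this is my guess at the output format, not the repository's actual to_eval_ply implementation:

```python
import numpy as np
from plyfile import PlyData, PlyElement

def write_eval_ply(pos, pre, gt, filename):
    # Hypothetical stand-in for dataset.to_eval_ply: writes one vertex per
    # point, carrying 'pre' (predicted instance id) and 'gt' (ground-truth
    # instance id) fields, which evaluation_stats_FOR.py reads back in.
    # pos: (N, 3) array of xyz coordinates; pre, gt: (N,) integer arrays.
    vertices = np.empty(
        len(pos),
        dtype=[("x", "f4"), ("y", "f4"), ("z", "f4"), ("pre", "i4"), ("gt", "i4")],
    )
    vertices["x"], vertices["y"], vertices["z"] = pos[:, 0], pos[:, 1], pos[:, 2]
    vertices["pre"] = pre
    vertices["gt"] = gt
    PlyData([PlyElement.describe(vertices, "vertex")]).write(filename)
```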

bxiang233 (Collaborator) commented


Hi, great! Thank you for pointing out this bug. I will fix it now.

zqalex (Author) commented Apr 22, 2024


> I'm also getting the "TypeError: 'NoneType' object is not iterable" error. I downloaded the FOR-instance dataset and placed it in the correct location based on the author's answer, but I still get this issue. How did you fix it?

After I successfully ran evaluation_stats_FOR.py and got proper evaluation results, I didn't look into the 'NoneType' problem any further. I'm sorry, but there's nothing I can do to help you.

bxiang233 (Collaborator) commented May 2, 2024


Hi, someone else met the same problem, and we solved it by:

1. Renaming the folder "data" to "data_outpointsrm".

2. Changing every "data" to "data_outpointsrm" on line 22 of "PanopticSegForLargeScalePointCloud/conf/eval.yaml", i.e. changing

   fold:['/path/to/project/PanopticSegForLargeScalePointCloud/data/treeinsfused/raw/CULS/CULS_plot_2_annotated_test.ply', '/path/........

   to

   fold:['/path/to/project/PanopticSegForLargeScalePointCloud/data_outpointsrm/treeinsfused/raw/CULS/CULS_plot_2_annotated_test.ply', '/path/........

3. Changing line 227 in "PanopticSegForLargeScalePointCloud/torch_points3d/datasets/segmentation/treeins.py" from

```python
if self.forest_regions == []:  # @treeins: get all data file names in folder self.raw_dir
    return glob.glob(self.raw_dir + '/**/*.ply', recursive=True)
```

to

```python
if self.forest_regions == [] or self.forest_regions is None:  # @treeins: get all data file names in folder self.raw_dir
    return glob.glob(self.raw_dir + '/**/*.ply', recursive=True)
```

Hope it helps!
