
AssertionError: datasets should not be an empty iterable #51

Open
Crescent-cc opened this issue Nov 17, 2024 · 2 comments

Comments

@Crescent-cc

Hi, after linking the datasets I ran the prediction code and got this error. Is there a problem with my folder layout or with the datasets I am using?

@wyf2020
Contributor

wyf2020 commented Nov 17, 2024

Hi, this looks like a problem with the dataset download or the symlinks. Could you share the commands you used to link the datasets and to run the prediction code, along with the complete error output, so we can narrow down the issue?

@Crescent-cc
Author

Thanks for the reply. I set up the environment following the tutorial, then linked the two datasets from the testdata folder into the corresponding test folders under data, with:
ln -s /home/user/EfficientLoFTR-main/testdata/scannet_test_1500/* /home/user/EfficientLoFTR-main/data/scannet/test
ln -s /home/user/EfficientLoFTR-main/testdata/megadepth_test_1500/* /home/user/EfficientLoFTR-main/data/megadepth/test
Then I ran bash scripts/reproduce_test/indoor_full_time.sh, which reported this error:
(eloftr) root:~/EfficientLoFTR-main# bash scripts/reproduce_test/indoor_opt_time.sh
{'accelerator': 'ddp',
'accumulate_grad_batches': 1,
'amp_backend': 'native',
'amp_level': 'O2',
'auto_lr_find': False,
'auto_scale_batch_size': False,
'auto_select_gpus': False,
'batch_size': 1,
'benchmark': True,
'check_val_every_n_epoch': 1,
'checkpoint_callback': True,
'ckpt_path': 'weights/eloftr_outdoor.ckpt',
'data_cfg_path': 'configs/data/scannet_test_1500.py',
'default_root_dir': None,
'deter': False,
'deterministic': False,
'distributed_backend': None,
'dump_dir': 'dump/eloftr_full_scannet',
'fast_dev_run': False,
'flash': False,
'flush_logs_every_n_steps': 100,
'fp32': False,
'gpus': -1,
'gradient_clip_algorithm': 'norm',
'gradient_clip_val': 0.0,
'half': False,
'limit_predict_batches': 1.0,
'limit_test_batches': 1.0,
'limit_train_batches': 1.0,
'limit_val_batches': 1.0,
'log_every_n_steps': 50,
'log_gpu_memory': None,
'logger': True,
'main_cfg_path': 'configs/loftr/eloftr_optimized.py',
'max_epochs': None,
'max_steps': None,
'max_time': None,
'megasize': None,
'min_epochs': None,
'min_steps': None,
'move_metrics_to_cpu': False,
'multiple_trainloader_mode': 'max_size_cycle',
'npe': False,
'num_nodes': 1,
'num_processes': 1,
'num_sanity_val_steps': 2,
'num_workers': 4,
'overfit_batches': 0.0,
'pixel_thr': None,
'plugins': None,
'precision': 32,
'prepare_data_per_node': True,
'process_position': 0,
'profiler': None,
'profiler_name': 'inference',
'progress_bar_refresh_rate': None,
'ransac': None,
'ransac_times': 1,
'reload_dataloaders_every_epoch': False,
'replace_sampler_ddp': True,
'resume_from_checkpoint': None,
'rmbd': 1,
'scannetX': 640,
'scannetY': 480,
'stochastic_weight_avg': False,
'sync_batchnorm': False,
'terminate_on_nan': False,
'thr': 20.0,
'tpu_cores': None,
'track_grad_norm': -1,
'truncated_bptt_steps': None,
'val_check_interval': 1.0,
'weights_save_path': None,
'weights_summary': 'top'}
Global seed set to 66
2024-11-18 12:09:54.169 | INFO | __main__:<module>:128 - Args and config initialized!
/home/user/EfficientLoFTR-main/src/lightning/lightning_loftr.py:63: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state_dict = torch.load(pretrained_ckpt, map_location='cpu')['state_dict']
2024-11-18 12:09:55.181 | INFO | src.lightning.lightning_loftr:__init__:65 - Load 'weights/eloftr_outdoor.ckpt' as pretrained checkpoint
2024-11-18 12:09:55.182 | INFO | __main__:<module>:133 - LoFTR-lightning initialized!
2024-11-18 12:09:55.183 | INFO | __main__:<module>:137 - DataModule initialized!
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
2024-11-18 12:09:55.277 | INFO | __main__:<module>:142 - Start testing!
Global seed set to 66
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
2024-11-18 12:09:55.610 | INFO | src.lightning.data:setup:125 - [rank:0] world_size: 1
2024-11-18 12:09:55.610 | INFO | src.lightning.data:_setup_dataset:216 - [rank 0]: 0 scene(s) assigned.
[rank:0] loading test datasets: 0it [00:00, ?it/s]
[rank0]: Traceback (most recent call last):
[rank0]: File "./test.py", line 143, in <module>
[rank0]: trainer.test(model, datamodule=data_module, verbose=False)
[rank0]: File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in test
[rank0]: results = self._run(model)
[rank0]: File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 713, in _run
[rank0]: self.call_setup_hook(model) # allow user to setup lightning_module in accelerator environment
[rank0]: File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1159, in call_setup_hook
[rank0]: self.datamodule.setup(stage=fn)
[rank0]: File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 385, in wrapped_fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/user/EfficientLoFTR-main/src/lightning/data.py", line 190, in setup
[rank0]: self.test_dataset = self._setup_dataset(
[rank0]: File "/home/user/EfficientLoFTR-main/src/lightning/data.py", line 221, in _setup_dataset
[rank0]: return dataset_builder(data_root, local_npz_names, split_npz_root, intri_path,
[rank0]: File "/home/user/EfficientLoFTR-main/src/lightning/data.py", line 272, in _build_concat_dataset
[rank0]: return ConcatDataset(datasets)
[rank0]: File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/torch/utils/data/dataset.py", line 328, in __init__
[rank0]: assert len(self.datasets) > 0, "datasets should not be an empty iterable" # type: ignore[arg-type]
[rank0]: assert len(self.datasets) > 0, "datasets should not be an empty iterable" # type: ignore[arg-type]
[rank0]: AssertionError: datasets should not be an empty iterable
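The assertion fires because the loader found zero scene .npz files under data/scannet/test, which usually means the symlinks are broken or point at empty directories. A minimal sketch of a sanity check, demonstrated on a throwaway mock layout (the directory names mirror the commands above; scene0000_00.npz is a hypothetical file name):

```shell
# Build a mock checkout so the check is self-contained.
root=$(mktemp -d)
mkdir -p "$root/testdata/scannet_test_1500" "$root/data/scannet/test"
touch "$root/testdata/scannet_test_1500/scene0000_00.npz"

# Same linking pattern the reporter used:
ln -s "$root"/testdata/scannet_test_1500/* "$root"/data/scannet/test

# Check 1: do the links resolve? (-e follows symlinks, so a broken
# link prints BROKEN even though the link file itself exists)
for f in "$root"/data/scannet/test/*; do
  [ -e "$f" ] && echo "ok: $f" || echo "BROKEN: $f"
done

# Check 2: does the test split actually contain .npz index files?
# A count of 0 reproduces the empty-ConcatDataset assertion.
ls "$root"/data/scannet/test/*.npz | wc -l
```

Running the same two checks against the real /home/user/EfficientLoFTR-main/data/scannet/test should show whether the downloaded testdata actually contains the scene index files the data module expects.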
