
Use my own data set to deploy in the project #347

Open
BruceBaiz opened this issue Oct 15, 2024 · 6 comments

@BruceBaiz

How can I deploy this project using my own dataset?

I would appreciate it if you could help me.

@ellisdg
Owner

ellisdg commented Oct 21, 2024

Check out the notebook for the BraTS example. You can change the filenames to point to your data, as well as the other configuration settings. There are quite a few ways you may want to tune the configuration to fit your data, depending on your use case.
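
For illustration, a minimal sketch of what pointing such a configuration at your own data could look like. This is an assumption-laden sketch, not the project's confirmed schema: the keys `n_folds`, `labels`, `batch_size`, and `n_epochs` appear in the training log later in this thread, but the `dataset`/`training_filenames` structure and all filenames are hypothetical placeholders.

```python
# Hypothetical sketch of a training config; key names are partly taken
# from the training log in this thread, structure and filenames assumed.
import json

config = {
    "n_folds": 5,                    # cross-validation folds (seen in the log)
    "dataset": {                     # hypothetical section name
        "labels": [1, 2],            # label values present in your masks
        "training_filenames": [      # hypothetical key: image/label pairs
            ["data/sub01_image.nii.gz", "data/sub01_label.nii.gz"],
            ["data/sub02_image.nii.gz", "data/sub02_label.nii.gz"],
        ],
    },
    "training": {                    # the log reads config["training"][...]
        "batch_size": 1,
        "n_epochs": 250,
    },
}

with open("TH2024_config.json", "w") as f:
    json.dump(config, f, indent=2)
```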

@BruceBaiz
Author

But I don't know how to convert my medical images into the format of your BraTS example dataset. Can you tell me how to convert JPG medical images or MP4 medical videos into that format?
I would appreciate any help!

@ellisdg
Owner

ellisdg commented Oct 23, 2024

You'll want to use some sort of tool to convert your images to NIfTI format. This tool might be able to do it:
https://github.com/rordenlab/i2nii
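
If i2nii doesn't fit, here is a minimal sketch of the same conversion in Python, assuming Pillow and nibabel are installed. The filename and the identity affine are placeholders; real scans should carry a meaningful affine (voxel spacing and orientation).

```python
# Minimal sketch: convert a 2D grayscale JPEG into a one-slice NIfTI volume.
# Filename and identity affine are hypothetical placeholders.
import numpy as np
import nibabel as nib
from PIL import Image

img = np.asarray(Image.open("thyroid_slice.jpg").convert("L"))  # (H, W) grayscale
volume = img[..., np.newaxis]  # add a third axis -> (H, W, 1) 3D volume
nifti = nib.Nifti1Image(volume.astype(np.float32), affine=np.eye(4))
nib.save(nifti, "thyroid_slice.nii.gz")
```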

@BruceBaiz
Author

Thank you very much for your help, but I have run into a new problem.

I have some ultrasound images of the thyroid, some of which are labeled, and I also have some medical videos of the thyroid, but I'm not sure how to convert them into the 3D U-Net input format for this project.
If I convert only the 2D images to .nii format, which I tried, they cannot be used as model input for this project.
If I instead convert the 3D videos to .nii as model input, my video dataset is not labeled. I was hoping you could tell me how I can use this model with my dataset.

I would appreciate any help!
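
For the video case, a minimal sketch of stacking frames into a single 3D NIfTI volume, assuming OpenCV (cv2) and nibabel are installed. The filename and identity affine are placeholders, and note that the third axis here is time rather than a spatial dimension.

```python
# Minimal sketch: read an MP4, stack grayscale frames into a (H, W, T) volume.
# Filename and identity affine are hypothetical placeholders.
import cv2
import numpy as np
import nibabel as nib

cap = cv2.VideoCapture("thyroid_clip.mp4")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:  # end of video (or unreadable file)
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))  # one slice per frame
cap.release()

volume = np.stack(frames, axis=-1).astype(np.float32)  # (H, W, n_frames)
nib.save(nib.Nifti1Image(volume, affine=np.eye(4)), "thyroid_clip.nii.gz")
```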

@BruceBaiz
Author

(3dunet) root@dsw-692855-7f756f5c44-m2cfh:/mnt/workspace/3DUnetCNN/examples/TH2024# python /mnt/workspace/3DUnetCNN/unet3d/scripts/train.py --config_filename TH2024_config.json
2024-10-27 21:08:35,811 - root - INFO - Config: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config.json
2024-10-27 21:08:35,812 - root - INFO - Work Dir: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config
2024-10-27 21:08:35,812 - root - DEBUG - Found value '5' for key 'n_folds'
2024-10-27 21:08:35,812 - root - DEBUG - Could not find value for key 'random_seed'; default to 25
2024-10-27 21:08:35,819 - root - INFO - Running cross validation fold: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config/fold1.json
2024-10-27 21:08:35,819 - root - INFO - Config: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config/fold1.json
2024-10-27 21:08:35,819 - root - INFO - Work Dir: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config/fold1
2024-10-27 21:08:35,819 - root - INFO - Model: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config/fold1/model.pth
2024-10-27 21:08:35,819 - root - INFO - Log: /mnt/workspace/3DUnetCNN/examples/TH2024/TH2024_config/fold1/training_log.csv
2024-10-27 21:08:35,819 - root - DEBUG - Found value '[1, 2]' for key 'labels'
2024-10-27 21:08:35,819 - root - DEBUG - Found value 'True' for key 'setup_label_hierarchy'
2024-10-27 21:08:35,819 - root - INFO - Setting config["training"]["test_input"]=1
2024-10-27 21:08:35,820 - root - DEBUG - Could not find value for key 'add_contours'; default to False
2024-10-27 21:08:35,820 - root - DEBUG - Found value 'False' for key 'pin_memory'
2024-10-27 21:08:35,820 - root - DEBUG - Found value '1' for key 'n_workers'
2024-10-27 21:08:35,820 - root - DEBUG - Found value '1' for key 'test_input'
2024-10-27 21:08:35,820 - root - DEBUG - Found value '1' for key 'batch_size'
2024-10-27 21:08:35,820 - root - DEBUG - Found value '1' for key 'validation_batch_size'
2024-10-27 21:08:35,820 - root - DEBUG - Could not find value for key 'prefetch_factor'; default to 1
2024-10-27 21:08:35,820 - LoadImage - DEBUG - required package for reader pydicomreader is not installed, or the version doesn't match requirement.
2024-10-27 21:08:35,821 - LoadImage - DEBUG - required package for reader itkreader is not installed, or the version doesn't match requirement.
2024-10-27 21:08:35,821 - LoadImage - DEBUG - required package for reader nrrdreader is not installed, or the version doesn't match requirement.
/root/miniconda3/envs/3dunet/lib/python3.9/site-packages/monai/utils/deprecate_utils.py:321: FutureWarning: monai.transforms.croppad.dictionary CropForegroundd.__init__:allow_smaller: Current default value of argument allow_smaller=True has been deprecated since version 1.2. It will be changed to allow_smaller=False in version 1.5.
  warn_deprecated(argname, msg, warning_category)
/root/miniconda3/envs/3dunet/lib/python3.9/site-packages/monai/data/dataset.py:374: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return torch.load(hashfile)
2024-10-27 21:08:36,304 - LoadImage - DEBUG - required package for reader pydicomreader is not installed, or the version doesn't match requirement.
2024-10-27 21:08:36,304 - LoadImage - DEBUG - required package for reader itkreader is not installed, or the version doesn't match requirement.
2024-10-27 21:08:36,305 - LoadImage - DEBUG - required package for reader nrrdreader is not installed, or the version doesn't match requirement.
2024-10-27 21:08:36,945 - root - DEBUG - Using criterion DiceLoss from monai with kwargs: {'include_background': True, 'sigmoid': True}
/root/miniconda3/envs/3dunet/lib/python3.9/site-packages/_distutils_hack/__init__.py:54: UserWarning: Reliance on distutils from stdlib is deprecated. Users must rely on setuptools to provide the distutils module. Avoid importing distutils or import setuptools first, and avoid setting SETUPTOOLS_USE_DISTUTILS=stdlib. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
  warnings.warn(
2024-10-27 21:08:38,023 - root - DEBUG - Found value '250' for key 'n_epochs'
2024-10-27 21:08:38,023 - root - DEBUG - Found value 'None' for key 'early_stopping_patience'
2024-10-27 21:08:38,023 - root - DEBUG - Found value 'True' for key 'save_best'
2024-10-27 21:08:38,023 - root - DEBUG - Found value 'None' for key 'save_every_n_epochs'
2024-10-27 21:08:38,023 - root - DEBUG - Found value 'None' for key 'save_last_n_models'
2024-10-27 21:08:38,023 - root - DEBUG - Found value 'False' for key 'amp'
2024-10-27 21:08:38,023 - root - DEBUG - Could not find value for key 'samples_per_epoch'; default to None
2024-10-27 21:08:38,023 - root - DEBUG - Could not find value for key 'training_iterations_per_epoch'; default to 1
Traceback (most recent call last):
  File "/mnt/workspace/3DUnetCNN/unet3d/scripts/train.py", line 177, in <module>
    main()
  File "/mnt/workspace/3DUnetCNN/unet3d/scripts/train.py", line 173, in main
    run(config_filename, output_dir, namespace)
  File "/mnt/workspace/3DUnetCNN/unet3d/scripts/train.py", line 76, in run
    run(_config_filename, work_dir, namespace)
  File "/mnt/workspace/3DUnetCNN/unet3d/scripts/train.py", line 131, in run
    run_training(model=model.train(), optimizer=optimizer, criterion=criterion,
  File "/mnt/workspace/3DUnetCNN/unet3d/train/train.py", line 55, in run_training
    losses.append(epoch_training(training_loader, model, criterion, optimizer=optimizer, epoch=epoch,
  File "/mnt/workspace/3DUnetCNN/unet3d/train/training_utils.py", line 60, in epoch_training
    loss, batch_size = batch_loss(model, images, target, criterion, n_gpus=n_gpus, use_amp=use_amp)
  File "/mnt/workspace/3DUnetCNN/unet3d/train/training_utils.py", line 98, in batch_loss
    return _batch_loss(model, images, target, criterion, inferer=inferer)
  File "/mnt/workspace/3DUnetCNN/unet3d/train/training_utils.py", line 111, in _batch_loss
    loss = criterion(output, target)
  File "/root/miniconda3/envs/3dunet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/3dunet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/3dunet/lib/python3.9/site-packages/monai/losses/dice.py", line 169, in forward
    raise AssertionError(f"ground truth has different shape ({target.shape}) from input ({input.shape})")
AssertionError: ground truth has different shape (torch.Size([1, 2, 128, 128, 128])) from input (torch.Size([1, 3, 128, 128, 128]))

Can you tell me why this is happening? Thank you so much!

@ellisdg
Owner

ellisdg commented Oct 28, 2024

AssertionError: ground truth has different shape (torch.Size([1, 2, 128, 128, 128])) from input (torch.Size([1, 3, 128, 128, 128]))

The number of channels in the model output is different from the number of channels in the ground truth images: the model output has 3 channels while the ground truth has 2. I don't know which count is correct for your data, but this should be fixable by changing the configuration file so that they match.
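
A minimal sketch reproducing this check with MONAI's DiceLoss directly, to show that the error comes purely from a channel-count mismatch in dimension 1; the tensor shapes mirror the traceback above.

```python
# Reproduce the shape assertion from monai.losses.DiceLoss, then show
# that matching the channel dimension (dim 1) resolves it.
import torch
from monai.losses import DiceLoss

criterion = DiceLoss(include_background=True, sigmoid=True)

pred = torch.rand(1, 3, 128, 128, 128)                        # model output: 3 channels
target = torch.randint(0, 2, (1, 2, 128, 128, 128)).float()   # ground truth: 2 channels

try:
    criterion(pred, target)  # raises: shapes differ in dim 1
except AssertionError as e:
    print(e)

target_fixed = torch.randint(0, 2, (1, 3, 128, 128, 128)).float()
print(criterion(pred, target_fixed))  # works once channel counts match
```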
