Releases: mathpluscode/ImgX-DiffSeg

Release v0.3.2

31 Dec 04:29
277eee4

⚠️ This release changed the network architecture and training strategies, so previous checkpoints are not compatible. The changes may also impact model performance.

Added

  • Added dropout to the U-Net; this may increase memory consumption.
  • Added support for anisotropic volumes in data augmentation.
  • Added data augmentations including random gamma adjustment, random flip, and random shearing (see the sketch after this list).
  • Added registration-related metrics and losses.
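
A minimal sketch of what a flip-plus-gamma augmentation could look like in JAX; the function name, signature, and gamma range below are illustrative assumptions, not the repository's actual API:

```python
import jax
import jax.numpy as jnp


def random_flip_and_gamma(key, image, gamma_range=(0.7, 1.5)):
    """Hypothetical augmentation: random flip plus random gamma adjustment.

    Assumes image intensities are normalised to [0, 1].
    """
    key_flip, key_gamma = jax.random.split(key)
    # Flip along the last spatial axis with probability 0.5.
    image = jnp.where(jax.random.bernoulli(key_flip), jnp.flip(image, axis=-1), image)
    # Sample a gamma value uniformly and apply the intensity adjustment.
    gamma = jax.random.uniform(key_gamma, minval=gamma_range[0], maxval=gamma_range[1])
    return jnp.clip(image, 0.0, 1.0) ** gamma


# Example usage on a random 3D volume.
key = jax.random.PRNGKey(0)
volume = jax.random.uniform(jax.random.fold_in(key, 1), (64, 64, 64))
augmented = random_flip_and_gamma(key, volume)
```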

Changed

  • ⚠️ Moved the imgx_datasets package into imgx/datasets as a submodule.
  • 😃 Moved the dataset iterator out of Experiment to facilitate using non-TFDS datasets.
  • Aligned the Transformer with the Haiku implementation.
  • Used jax.random.fold_in for random key splitting to avoid passing keys between functions.
  • Used optax.softmax_cross_entropy to replace the custom implementation (see the sketch after this list).
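
A small illustration of the two library calls named above; shapes and values are made up for the example:

```python
import jax
import jax.numpy as jnp
import optax

# jax.random.fold_in derives a fresh key from a base key and an integer
# (e.g. the training step), so a function can reconstruct its own key
# deterministically instead of receiving a split key as an argument.
base_key = jax.random.PRNGKey(0)
step_key = jax.random.fold_in(base_key, 42)
noise = jax.random.normal(step_key, (4, 8))

# optax.softmax_cross_entropy takes logits and one-hot labels, replacing a
# hand-written softmax cross-entropy.
logits = jnp.zeros((4, 3))
labels = jax.nn.one_hot(jnp.array([0, 1, 2, 0]), num_classes=3)
loss = optax.softmax_cross_entropy(logits=logits, labels=labels).mean()
```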

Release v0.3.1

25 Nov 02:55
5349fdb

Added

  • Added example notebooks for inference on a single image without TFDS.
  • Added integration tests for training, validation, and testing.

Changed

  • ⚠️ Upgraded JAX to 0.4.20.
  • ⚠️ Removed the Haiku-specific modification to convolutional layers. This may impact model performance.
  • Refactored the config:
    • Added patch_size and scale_factor to the data config (a hypothetical sketch follows this list).
    • Moved the loss config from the main config to the task config.
  • Refactored code, including defining the imgx/task submodule.
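
Purely as a hypothetical sketch of the two new data-config fields mentioned above; the repository's real config layout and default values may differ:

```python
from dataclasses import dataclass


@dataclass
class DataConfig:
    """Hypothetical data config carrying the two new fields."""

    # Spatial size of patches sampled from each volume (made-up default).
    patch_size: tuple[int, int, int] = (128, 128, 128)
    # Per-axis scaling factor for resampling (made-up default).
    scale_factor: tuple[int, int, int] = (1, 1, 1)
```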

A Recycling Training Strategy for Medical Image Segmentation with Diffusion Denoising Models

22 Oct 17:59
4dcac5f

Deep Generative Models workshop @ MICCAI 2023

30 Aug 17:31
1409624

This tag corresponds to the code used for the paper Importance of Aligning Training Strategy with Evaluation for Diffusion Models in 3D Multiclass Segmentation, accepted at the Deep Generative Models workshop @ MICCAI 2023.