Releases: nmichlo/disent
v0.0.1.dev13
Notable Changes:
- new Auto-Encoders:
  - `Ae`
  - `TripletAe` (`Ae` version of `TripletVae`)
  - `AdaAe` (`Ae` version of `AdaVae`)
  - `AdaNegTripletAe` (`Ae` version of `AdaNegTripletVae`)
- custom dataset MNIST example in the docs
Breaking Changes
- flattened the `disent.frameworks.vae` and `disent.frameworks.ae` modules, the `unsupervised`, `weaklysupervised`, and `supervised` submodules no longer exist
- removed the latent parameter classes from VAEs, VAEs now directly encode distributions with the `encode_dists()` function, which simplified a lot of other code
- datasets now only return `'x'` in the observation dictionary if an `augment` is specified, giving a ~5% performance boost
- some dependencies are now optional, more work is still required to minimise dependencies
- removed `sample_random_traversal_factors` and `sample_random_cycle_factors` from `StateSpace`, replaced with the generic function `sample_random_factor_traversal`
- renamed all autoencoders from `AE` to `Ae`
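The move away from latent parameter classes can be sketched roughly as follows: instead of wrapping `(mean, logvar)` in a parameter object, the encoder returns distribution objects directly. This is a hypothetical numpy illustration of the pattern only, the class and function bodies here are made up and are not disent's actual code:

```python
import numpy as np

class Normal:
    """A minimal stand-in for a torch.distributions.Normal-style object."""
    def __init__(self, loc, scale):
        self.loc, self.scale = np.asarray(loc), np.asarray(scale)
    def sample(self, rng):
        return self.loc + self.scale * rng.normal(size=self.loc.shape)

def encode_dists(x):
    # hypothetical encoder: in a real VAE a network predicts these parameters
    mean = x.mean(axis=-1, keepdims=True)
    logvar = np.zeros((len(x), 1))
    posterior = Normal(mean, np.exp(0.5 * logvar))
    prior = Normal(np.zeros_like(mean), np.ones_like(mean))
    return posterior, prior

posterior, prior = encode_dists(np.ones((4, 8)))
```

Returning distributions directly means downstream code (sampling, KL terms) can work with one object instead of unpacking a parameter class first.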
Other Changes:
- hdf5 dataset performance fix, now up to 5x faster when not loaded into memory
- all Auto-Encoders have new config options to disable the augment loss, recon loss, or detach the decoder so that no loss flows back through the encoder. VAEs can additionally have the regularisation loss disabled.
- new `laplace` latent distribution, can be specified in VAE configs
- triplet loss helper functions
- flatness components metric helper functions for use elsewhere: `compute_linear_score`, `compute_axis_score`
- `FftKernel` augment module inheriting from `torch.nn.Module`, applies a channel-wise convolution to the input
- `to_standardised_tensor` fix for non-`PIL.Image.Image` inputs
- more math helper functions:
  - `torch_normalize` normalises values along an axis to between 0 and 1
  - `torch_mean_generalized` now supports the `keepdim` argument
- `disent.visualise.visualise_module`: removed old redundant code adapted from disentanglement_lib
- `disent.visualise.visualise_util` additions: `make_image_grid` and `make_animated_image_grid` auto-detect border colour from the input dtype
- replaced `cycle_factor` with `get_factor_traversal`, which accepts different modes: `interval` and `cycle`
- cleaned up experiments
++ many more additions and minor fixes ++
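As a rough illustration of the `torch_normalize` helper mentioned above, the same min-max normalisation along an axis can be sketched in numpy (the name `normalize_minmax` and the exact semantics are assumptions, the real disent helper operates on torch tensors):

```python
import numpy as np

def normalize_minmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Min-max normalise values along an axis into the range [0, 1].

    A numpy sketch of what a helper like `torch_normalize` might do.
    """
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo)

# each row is scaled independently into [0, 1]
out = normalize_minmax(np.array([[0.0, 5.0, 10.0],
                                 [2.0, 2.5, 3.0]]))
```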
v0.0.1.dev12
Large Release
- utility additions:
  - dct
  - kernels: gaussian + box
  - channel-wise conv2d
  - differentiable sorting, spearman rank loss
- ground truth dataset with factors
- more reconstruction losses
- kernel reconstruction losses
- recon loss fixes
- parameterised recon losses
- scaled hard averaging for adatvae and adanegtvae
- DataOverlapRankVAE - uses differentiable sorting to optimise spearman rank correlation coefficient instead of triplet loss
- DataOverlapTripletVAE - fixes, simplifications, moved out triplet mining
- removed unnecessary metric values
- Conv64Alt encoder and decoder that support normalisation layers for faster convergence
- FFT gaussian and box blur augments
- more experiment schedules
- more experiments
And much more...
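The gaussian and box kernels listed above can be sketched in numpy (a hedged illustration only, the function names here are made up and the actual disent kernels are torch tensors used for the FFT blur augments):

```python
import numpy as np

def gaussian_kernel_1d(size: int, sigma: float) -> np.ndarray:
    # sampled gaussian, normalised so the weights sum to 1
    xs = np.arange(size) - (size - 1) / 2
    k = np.exp(-0.5 * (xs / sigma) ** 2)
    return k / k.sum()

def box_kernel_1d(size: int) -> np.ndarray:
    # uniform averaging kernel, also sums to 1
    return np.full(size, 1.0 / size)

# a separable 2d blur kernel is the outer product of two 1d kernels
blur_2d = np.outer(gaussian_kernel_1d(5, 1.0), gaussian_kernel_1d(5, 1.0))
```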
v0.0.1.dev11
- fixed init files
v0.0.1.dev10
frameworks
- simplified ada frameworks
- moved schedules out of ada frameworks into configs
- extra kl divergence modes
metrics
- combined flatness components
- axis alignment ratio
- linearity ratio
- incorrect swap ratio
experiments
- existing configs should be frozen -- changes should be added to experiment scripts below
- helper script
- experiment scripts
more
- and much more
v0.0.1.dev9
metrics
- flatness components
- reworked the linearity component, which now uses PCA to measure linearity along a single arbitrary basis, and the variance of embeddings to measure linearity along the axes.
dataset wrappers
- new random dist dataset
- only triplets have some sort of order, otherwise everything is sampled randomly
- RandomEpisodeDataset now has RandomDataset as parent
torch util - math
- PCA functions
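A minimal numpy sketch of PCA via SVD, the kind of helper the "PCA functions" bullet above likely refers to (the function name and signature here are assumptions, not disent's API):

```python
import numpy as np

def pca(X: np.ndarray, n_components: int):
    """Project X onto its top principal components using SVD.

    Returns the projected data and the component directions.
    """
    Xc = X - X.mean(axis=0)                      # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]               # rows are principal axes
    return Xc @ components.T, components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z, comps = pca(X, 2)
```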
v0.0.1.dev8
- renamed the flatness components metric (originally called dual flatness)
v0.0.1.dev7
metrics
- dual flatness metric
- measures linearity and ordering of latent traversals
- ordering over randomly sampled embeddings compared to ground truth factors
utils
- torch math helper functions
- Covariance matrix
- Pearson's correlation matrix
- Spearman's rank correlation matrix
- Generalised mean (p from `-inf` to `inf`, special cases for harmonic, geometric, arithmetic, min, max, quadratic means)
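The generalised mean and its special cases can be sketched in numpy (a hedged illustration of the maths, `torch_mean_generalized` in disent operates on torch tensors and its signature may differ):

```python
import numpy as np

def generalized_mean(x, p: float) -> float:
    """Power mean M_p(x) = mean(x**p) ** (1/p) for positive values.

    Special cases: p=-inf -> min, p=-1 -> harmonic, p=0 -> geometric,
    p=1 -> arithmetic, p=2 -> quadratic, p=+inf -> max.
    """
    x = np.asarray(x, dtype=float)
    if p == 0:                        # limit as p -> 0 is the geometric mean
        return float(np.exp(np.mean(np.log(x))))
    if np.isposinf(p):
        return float(x.max())
    if np.isneginf(p):
        return float(x.min())
    return float(np.mean(x ** p) ** (1.0 / p))
```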
state spaces
- fixes for `pos_to_idx` and `idx_to_pos` with array sizes with more than 1 dimension (excluding the last factor or idx dim)
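Conceptually, `pos_to_idx` and `idx_to_pos` behave like numpy's ravel/unravel of multi-indices over the factor sizes, a sketch of the idea rather than disent's implementation:

```python
import numpy as np

factor_sizes = (3, 4, 5)  # e.g. a ground-truth state space with 3 factors

# a batch of factor positions converts to flat indices and back again
pos = np.array([[2, 1, 3],
                [0, 0, 0]])
idx = np.ravel_multi_index(pos.T, factor_sizes)            # pos -> idx
back = np.stack(np.unravel_index(idx, factor_sizes), -1)   # idx -> pos
```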
v0.0.1.dev6
new frameworks
- Beta-TCVAE (probably wrong, needs verification & correct loss scaling)
frameworks
- DFC-VAE input fixes & support for 1 channel
v0.0.1.dev5
new frameworks
- DIP-VAE
- Info-VAE
- EXPERIMENTAL: Data Overlap TVAE
frameworks
- rewrote the frameworks to extend from a single VAE class; hooks are now available for easy overrides, and lots of duplicate code was removed.
datasets
- Fully random paired datasets
v0.0.1.dev4
cfg
- renamed the `loss_reduction` mode `batch_mean` to `mean_sum`
- changed the default `loss_reduction` mode to `mean`
- changed the default `beta` to match the new `loss_reduction` mode
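The difference between the two reduction modes can be sketched as follows (a numpy illustration of the idea; the function name is made up and exact disent behaviour may differ):

```python
import numpy as np

def reduce_loss(pixel_errors: np.ndarray, mode: str) -> float:
    """Reduce per-pixel reconstruction errors of shape (batch, ...) to a scalar.

    'mean'     -> mean over every element (the new default)
    'mean_sum' -> sum over each observation, then mean over the batch
                  (the mode previously called 'batch_mean')
    """
    if mode == 'mean':
        return float(pixel_errors.mean())
    if mode == 'mean_sum':
        per_item = pixel_errors.reshape(len(pixel_errors), -1).sum(axis=-1)
        return float(per_item.mean())
    raise KeyError(f'unknown reduction mode: {mode}')

# e.g. squared errors for a batch of 8 images of shape (3, 64, 64):
errs = np.ones((8, 3, 64, 64))
```

`mean_sum` is larger than `mean` by a factor of the number of elements per observation, which is why the default `beta` had to change to match the new default mode.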
flatness metric
- added average angle along traversals
- simplified greatly
reconstruction loss
- more reconstruction losses based on distributions:
  - `bernoulli`
  - `continuous_bernoulli`
  - `normal`
schedules
- new schedules:
- NoopSchedule: does absolutely nothing!
- CosineWaveSchedule: smooth cosine wave
- Adjusted arguments of most schedules
bugs
- fixed various runtime bugs
- logging crash for W&B
- disabled checkpointing in trainer
- flatness metric did not support axis size < 2