A research testbed on conditional variational autoencoders (CVAEs) using Gaussian distributions as input. We are interested in arbitrary conditioning of a CVAE and in the relationship between the information passing through the latent-dimension bottleneck and the input dimensions. Our goal is to generate a fully factorizable probabilistic model of structures in a cell.
- Free software: Allen Institute Software License
- Documentation: https://Gaussian-CVAE.readthedocs.io.
- Create conda environment
$ conda create --name cvae python=3.7
- Activate conda environment :
$ conda activate cvae
- Install the requirements listed in setup.py
$ pip install -e .[all]
- Run all synthetic data models:
$ cd scripts
$ ./run_all_synthetic_datasets.sh
This will run multiple synthetic experiments on 2 GPUs simultaneously. Each script can also be run individually. Here, we go through them one by one:
- Run baseline model.
$ cd scripts
$ ./baseline.sh
This model takes a set of independent Gaussian distributions as input. Specify the number of input dimensions 'x_dim' in baseline_kwargs.json. A minimal sketch of this setup is shown after the examples below.
Specifying 2 input dimensions gives
This plot can be viewed in outputs/baseline_results. The first panel shows the train and test loss. The other 3 plots are encoding tests of the model in the presence of different sets of conditions. 0 (blue) means that no conditions are provided, so the model uses 2 latent dimensions to encode the information. 1 (orange) means that one condition is provided, so the model needs only 1 latent dimension to encode the information. Finally, 2 (green) means that both conditions are provided, so the model needs no latent dimensions, i.e. all the information about the input data has already been supplied via the condition.
Similarly, specifying 4 input dimensions gives
specifying 6 input dimensions gives
and so on.
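The arbitrary-conditioning setup referenced above can be sketched in a few lines of Python. This is an illustrative sketch with made-up variable names, not the repo's actual data loader: for each sample, a random subset of the input dimensions is handed to the model as the condition, so the encoder only needs latent dimensions for whatever is left unconditioned:

    import torch

    # Hypothetical sketch: independent Gaussian inputs plus a random
    # "condition mask" for arbitrary conditioning.
    x_dim, batch_size = 2, 64
    x = torch.randn(batch_size, x_dim)           # independent unit Gaussians

    # For each sample, choose how many of the x_dim values are given as conditions.
    n_cond = torch.randint(0, x_dim + 1, (batch_size,))
    mask = (torch.arange(x_dim).expand(batch_size, x_dim)
            < n_cond.unsqueeze(1)).float()       # 1 = dimension provided as a condition

    cond = torch.cat([x * mask, mask], dim=1)    # condition vector: masked values + mask
    # The encoder then only needs (x_dim - n_cond) latent dimensions per sample
    # to carry the information not already supplied by the condition.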
- Run projected baseline model. This model takes a set of independent Gaussian distributions as input and projects them to a higher dimension. Specify the number of input dimensions 'x_dim' and the number of projected dimensions 'projection_dim' in baseline_kwargs_proj.json. A sketch of the projection step is shown after the examples below.
$ ./baseline_projected.sh
Projecting 2 dimensions to 8 dimensions gives
This plot can be viewed in outputs/baseline_results_projected. The model uses only 2 dimensions in the latent space to encode information from the 8-dimensional input dataset.
Similarly, projecting 2 dimensions to 4 dimensions gives
projecting 4 dimensions to 8 dimensions gives
and so on.
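A minimal sketch of the projection step (illustrative only; the repo's synthetic.py may construct the projection differently):

    import torch

    # Hypothetical sketch: project low-dimensional Gaussian data to a higher
    # dimension with a fixed random linear map.
    x_dim, projection_dim, batch_size = 2, 8, 64

    P = torch.randn(x_dim, projection_dim)       # fixed projection matrix
    x_low = torch.randn(batch_size, x_dim)       # intrinsic 2-D Gaussian data
    x_high = x_low @ P                           # 8-D observations of rank 2

    # Although x_high lives in 8 dimensions, its intrinsic dimensionality is 2,
    # so an unconditioned model should only need 2 active latent dimensions.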
- Run projected baseline model with a mask. This model takes a set of independent Gaussian distributions, projects them to a higher-dimensional space, and then masks a percentage of the input data during training.
$ ./baseline_projected_with_mask.sh
Here we need to update the loss function to not penalize masked data. Without doing this, projecting 2 dimensions to 8 dimensions with 50% of the input data masked gives
After updating the loss, doing the same thing gives
Despite 50% of the data being masked, the model still uses only 2 dimensions in the latent space.
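The key change is a reconstruction term that skips masked entries. A minimal sketch, assuming a mean-squared-error reconstruction term (the repo's ELBO.py may weight or reduce the terms differently):

    import torch

    def masked_recon_loss(x_hat, x, observed):
        """Reconstruction term that ignores masked-out entries.

        x_hat, x : (batch, dim) tensors
        observed : (batch, dim) float mask, 1 where the input was observed,
                   0 where it was masked out during training.
        Illustrative sketch only, not the repo's actual loss.
        """
        se = (x_hat - x) ** 2 * observed               # zero out masked positions
        return se.sum() / observed.sum().clamp(min=1)  # average over observed entries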
- Run swiss roll baseline model. This model takes the swiss roll dataset as input.
$ ./baseline_swissroll.sh
The swiss roll dataset is parametrized as:
x = \phi \cos(\phi)
y = \phi \sin(\phi)
z = \psi
Despite having 3 dimensions, it is parametrized by 2 dimensions. Running this script gives
This plot can be viewed in outputs/baseline_results_swissroll. We observe that given 0 conditions (blue), the data is embedded into only 2 dimensions in the latent space. Providing 1 condition (X) is no different than providing 2 conditions (X and Y), since both X and Y are parameterized by the same single dimension \phi. Finally, providing conditions that determine both \phi and \psi means that no information passes through the bottleneck and the model encodes no information.
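For reference, the parametrization above can be sampled as follows. This is an illustrative sketch; the parameter ranges are assumptions and not necessarily those used by the repo's dataset code:

    import math
    import torch

    # Two underlying parameters generate 3-D swiss roll points.
    n = 1024
    phi = 1.5 * math.pi * (1 + 2 * torch.rand(n))   # roll angle/radius parameter
    psi = 10 * torch.rand(n)                        # height parameter

    x = phi * torch.cos(phi)
    y = phi * torch.sin(phi)
    z = psi
    swiss_roll = torch.stack([x, y, z], dim=1)      # (n, 3) points, intrinsic dimension 2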
- Run sklearn datasets model. This model takes sklearn datasets such as circles, moons and blobs as input (a sketch of loading these datasets appears after the examples below).
$ ./baseline_circles_moons_blobs.sh
The type of dataset (i.e. circles, moons, blobs or an s_curve) is specified in "sklearn_data" in baseline_kwargs_circles_moons_blobs.json. Running this file for blobs gives
Similarly for moons gives
This is how the original data maps to the latent space
Similarly for an s_curve gives
And for circles gives
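These are the standard scikit-learn toy datasets; the parameter values below are illustrative, not the settings used in the kwargs files:

    from sklearn import datasets

    # 2-D point clouds (plus labels, ignored here); s_curve is 3-D.
    circles, _ = datasets.make_circles(n_samples=1000, noise=0.05, factor=0.5)
    moons, _   = datasets.make_moons(n_samples=1000, noise=0.05)
    blobs, _   = datasets.make_blobs(n_samples=1000, centers=3)
    s_curve, _ = datasets.make_s_curve(n_samples=1000, noise=0.05)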
- Run compare_models.py to compare results across output folders
- Visualize individual model runs or multiple model runs using the notebooks in CVAE_testbed/notebooks
- Run aics feature model. Here we pass 159 features (1 binary, 102 real, and 56 one-hot features) through the CVAE.
$ cd scripts
$ ./aics_features_simple.sh
Here is what the encoding looks like for a beta of 1
There is no information passing through the information bottleneck, i.e. the KL divergence term is near 0 and the model is close to the autoencoding limit.
We can vary beta and compare ELBO and FID scores in order to find the best model.
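Here, beta is the weight on the KL term of the ELBO. A minimal sketch of a beta-weighted ELBO, assuming a standard Gaussian prior and a diagonal Gaussian posterior (the repo's ELBO.py may differ in how the terms are weighted or reduced):

    import torch

    def beta_elbo(x_hat, x, mu, log_var, beta=1.0):
        """Negative beta-weighted ELBO for minimization (illustrative sketch)."""
        recon = ((x_hat - x) ** 2).sum(dim=1)                           # per-sample reconstruction
        kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum(dim=1)  # KL(q(z|x,c) || N(0, I))
        return (recon + beta * kl).mean()

Sweeping beta trades off reconstruction quality against the amount of information passing through the bottleneck, which is why comparing ELBO and FID across beta values helps pick the best model.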
The project has the following structure:
CVAE_testbed/
|- README.rst
|- setup.py
|- requirements.txt
|- tox.ini
|- Makefile
|- MANIFEST.in
|- HISTORY.rst
|- CHANGES.rst
|- AUTHORS.rst
|- LICENSE
|- docs/
|  |- ...
|- CVAE_testbed/
   |- __init__.py
   |- main_train.py
   |- baseline_kwargs.json
   |- mnist_kwargs.json
   |- tests/
   |  |- __init__.py
   |  |- test_function.py
   |  |- example.sh
   |- datasets/
   |  |- __init__.py
   |  |- dataloader.py
   |  |- synthetic.py
   |- losses/
   |  |- __init__.py
   |  |- ELBO.py
   |- metrics/
   |  |- __init__.py
   |  |- blur.py
   |  |- calculate_fid.py
   |  |- inception.py
   |  |- visualize_encoder.py
   |- models/
   |  |- __init__.py
   |  |- CVAE_baseline.py
   |  |- CVAE_first.py
   |  |- sample.py
   |- run_models/
   |  |- __init__.py
   |  |- generative_metric.py
   |  |- run_synthetic.py
   |  |- run_test_train.py
   |  |- test.py
   |  |- train.py
   |- scripts/
   |  |- __init__.py
   |  |- baseline.sh
   |  |- mnist.sh
   |  |- compare_models.py
   |- utils/
      |- __init__.py
      |- compare_plots.py
We are not currently supporting this code; we are simply releasing it to the community AS IS and are not able to provide any guarantees of support. The community is welcome to submit issues, but you should not expect an active response.
This package was created with Cookiecutter.