An awesome list of work on VAEs, disentanglement, representation learning, and generative models.
I gathered these resources (currently ~900 papers) as literature for my PhD, and thought they might be useful to others. The list covers works relevant to a range of topics relating to VAEs, and sometimes spills over into neighbouring areas, e.g. adversarial training and GANs, general disentanglement, variational inference, flow-based models, and auto-regressive models. I'm always keen to expand the list - feel free to contribute, or email me if I've missed your paper : ]
Papers are ordered by year (newest to oldest).
EVA: Generating Longitudinal Electronic Health Records Using Conditional Variational Autoencoders. Biswal, Ghosh, Duke, Malin, Stewart. https://arxiv.org/pdf/2012.10020.pdf
Cauchy-Schwarz Regularized Autoencoder. Tran, Pantic, Deisenroth. https://arxiv.org/pdf/2101.02149.pdf
Cross-Domain Variational Adversarial Autoencoder. Bai, Tian, Zhang, Yang. https://www.sciengine.com/doi/10.3724/SP.J.1089.2020.18115
Deep Generative Models of Gravitational Waveforms via Conditional Variational Autoencoder. Liao, Lin. https://arxiv.org/pdf/2101.06685.pdf
Face Image Inpainting via Variational Autoencoder. Zhang, Cheng, Bai, Zhang, Sun, Wang. https://www.sciengine.com/doi/10.3724/SP.J.1089.2020.17938
A Semi-supervised Approach for Early Identifying the Abnormal Carotid Arteries Using a Modified Variational Autoencoder. Huang, Cui, Wu, Li. https://ieeexplore.ieee.org/abstract/document/9313193/
Item Recommendation based on Multimedia Variational Autoencoder. Ran, Chen, Mu. https://iopscience.iop.org/article/10.1088/1755-1315/632/4/042044/pdf
Disentangled Recurrent Wasserstein Autoencoder. Han, Min, Lan, Li, Zhang. https://arxiv.org/pdf/2101.07496.pdf
Spatiotemporal Trajectories in Resting-state FMRI Revealed by Convolutional Variational Autoencoder. Zhang, Maltbie, Keilholz. https://www.biorxiv.org/content/biorxiv/early/2021/01/26/2021.01.25.427841.full.pdf
A Missing Data Imputation Method for 3D Object Reconstruction using Multi-modal Variational Autoencoder. Yu, Oh. https://arxiv.org/pdf/2101.10391.pdf
Hierarchical Domain Invariant Variational Auto-Encoding with weak domain supervision. Sun, Buettner. https://arxiv.org/pdf/2101.09436.pdf
Disentangled Sequence Clustering for Human Intention Inference. Zolotas, Demiris. https://arxiv.org/pdf/2101.09500.pdf
Improved Training of Sparse Coding Variational Autoencoder via Weight Normalization. Jiang, de la Iglesia. https://ui.adsabs.harvard.edu/abs/2021arXiv210109453P/abstract
Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE. Hajimiri, Lotfi, Baghshah. https://arxiv.org/pdf/2102.00892.pdf
Video Reenactment as Inductive Bias for Content-Motion Disentanglement. Albarracin, Rivera. https://arxiv.org/pdf/2102.00324.pdf
pmVAE: Learning Interpretable Single-Cell Representations with Pathway Modules. Gut, Stark, Ratsch, Davidson. https://www.biorxiv.org/content/biorxiv/early/2021/01/30/2021.01.28.428664.full.pdf
Hierarchical Variational Autoencoder for Visual Counterfactuals. Vercheval, Pizurica. https://arxiv.org/pdf/2102.00854.pdf
Probabilistic Models with Deep Neural Networks. Masegosa, Cabanas, Langseth, Nielsen, Salmeron. http://repositorio.ual.es/bitstream/handle/10835/9516/entropy-23-00117-v2.pdf?sequence=1&isAllowed=y
Variational Bayes survival analysis for unemployment modelling. Boskoski, Perne, Ramesa, Boshkoska. https://arxiv.org/pdf/2102.02295.pdf
Neighborhood Geometric Structure-Preserving Variational Autoencoder for Smooth and Bounded Data Sources. Chen, Wang, Lan, Zheng, Zeng. https://pubmed.ncbi.nlm.nih.gov/33556022/
A Method for Constructing Supervised Time Topic Model Based on Variational Autoencoder. Gou, Li, Huo. https://www.hindawi.com/journals/sp/2021/6623689/
A Variational Information Bottleneck Approach to Multi-Omics Data Integration. Lee, van der Schaar. https://arxiv.org/pdf/2102.03014.pdf
Using Deconvolutional Variational Autoencoder for Answer Selection in Community Question Answering. Boroujeni, Faili. https://ieeexplore.ieee.org/abstract/document/9345624/
Data augmentation using a variational autoencoder for estimating property prices. Lee. https://www.emerald.com/insight/content/doi/10.1108/PM-09-2020-0057/full/html
A conditional variational autoencoder based self-transferred algorithm for imbalanced classification. Zhao, Hao, Tang, Chen, Wei. https://www.sciencedirect.com/science/article/abs/pii/S0950705121000198
Layer VQE: A Variational Approach for Combinatorial Optimization on Noisy Quantum Computers. Liu, Angone, Shaydulin, Safro, Alexeev, Cincio. https://arxiv.org/pdf/2102.05566.pdf
Automatic variational inference with cascading flows. Ambrogioni, Silvestri, van Gerven. https://arxiv.org/pdf/2102.04801.pdf
On Disentanglement in Gaussian Process Variational Autoencoders. Bing, Fortuin, Ratsch. https://arxiv.org/pdf/2102.05507.pdf
Invertible local image descriptors learned with variational autoencoders. Zikakic, Pizurica. https://www.ieice.org/publications/proceedings/summary.php?expandable=8&iconf=ICTF&session_num=ICTF_5&number=ICTF2020_paper_24&year=2020
Meta-Learning Divergences for Variational Inference. Zhang, Li, Sa, Devlin, Zhang. http://proceedings.mlr.press/v130/zhang21o/zhang21o.pdf
Collaborative Denoising Graph Attention Autoencoders for Social Recommendation. Mu, Zha, Zhao, Gong. https://ksiresearch.org/seke/seke20paper/paper105.pdf
Variational Inference MPC using Tsallis Divergence. Wang, So, Gibson, Vlahov, Gandhi. https://arxiv.org/pdf/2104.00241.pdf
Dual Online Stein Variational Inference for Control and Dynamics. Barcelos, Lambert, Oliveira, Borges, Boots, Ramos. https://arxiv.org/pdf/2103.12890.pdf
DRANet: Disentangling Representation and Adaptation Networks for Unsupervised Cross-Domain Adaptation. Lee, Cho, Im. https://arxiv.org/pdf/2103.13447.pdf
SeqVAE: Sequence variational autoencoder with policy gradient. Gao, Cui, Ding. https://link.springer.com/article/10.1007/s10489-021-02374-7
Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning. Winter, Noe, Clevert. https://arxiv.org/pdf/2104.09856.pdf
Deep Mixture Generative Autoencoders. Ye, Bors. https://eprints.whiterose.ac.uk/173261/1/MixtureVAE_Dropout.pdf
Genetic Constrained Graph Variational Autoencoder for COVID-19 Drug Discovery. Cheng, Fan, Wang. https://arxiv.org/pdf/2104.11674.pdf
Transformer-based Conditional Variational Autoencoder for Controllable Story Generation. Fang, Zeng, Liu, Bo, Dong, Chen https://arxiv.org/pdf/2101.00828.pdf
Factor analysis, probabilistic principal component analysis, variational inference, and variational autoencoder: tutorial and survey. Ghojogh, Ghodsi, Karray, Crowley https://arxiv.org/pdf/2101.00734.pdf
Variational Selective Autoencoder: Learning from Partially-Observed Heterogeneous Data. Yong, Hajimisadeghi, He, Durand, Mori https://arxiv.org/pdf/2102.12679.pdf
Predicting Video with VQVAE. Walker, Razavi, van den Oord https://arxiv.org/pdf/2103.01950.pdf
A survey on Variational Autoencoders from a GreenAI perspective. Asperti, Evangelista, Piccolomini https://arxiv.org/pdf/2103.01071.pdf
crank: An Open-Source Software for Nonparallel Voice Conversion Based on Vector-Quantized Variational Autoencoder. Kobayashi, Huang, Wu, Tobing, Hayashi, Toda https://arxiv.org/pdf/2103.02858.pdf
Bilateral Variational Autoencoder for Collaborative Filtering. Truong, Salah, Lauw https://dl.acm.org/doi/abs/10.1145/3437963.3441759
Variational Structured Attention Networks for Deep Visual Representation Learning. Yang, Rota, Alameda-Pineda https://arxiv.org/pdf/2103.03510.pdf
Improving Bayesian Inference in Deep Neural Networks with Variational Structured Dropout. Nguyen, Nguyen, Nguyen, Ho, Than, Bui https://arxiv.org/pdf/2102.07927.pdf
A variational inference approach to learning multivariate Hawkes processes. Etesami, Trouleau, Kiyavash, Grossglauser, Thiran. http://proceedings.mlr.press/v130/etesami21a.html
Heterogeneous Hypergraph Variational Autoencoder for Link Prediction. Fan, Zhang, Wei, Li, Zou, Gao, Dai https://ieeexplore.ieee.org/abstract/document/9354594/
Certifiably Robust Variational Autoencoders. Barrett, Camuto, Willetts, Rainforth https://arxiv.org/pdf/2102.07559.pdf
Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks. Kuzina, Welling, Tomczak https://arxiv.org/pdf/2103.06701.pdf
Training Generative Adversarial Networks in One Stage. Shen, Yin, Wang, Li, Song, Song https://arxiv.org/pdf/2103.00430.pdf
Parallel Variational Autoencoders for Multiple Responses Generation. Li, Fu, Lin, Wang https://conferences.computer.org/ispapub/pdfs/ISPA-BDCloud-SocialCom-SustainCom2020-61uthIiswrO37XTCl0drpO/319900a128/319900a128.pdf
VDSM: Unsupervised Video Disentanglement with State-Space Modeling and Deep Mixtures of Experts. Vowels, Camgoz, Bowden https://arxiv.org/pdf/2103.07292.pdf
A Semi-Supervised Learning Method for MiRNA-Disease Association Prediction Based on Variational Autoencoder. Ji, Wang, Go, Li, Ni, Zheng https://europepmc.org/article/med/33735084
Using latent space regression to analyze and leverage compositionality in GANs. Chai, Wulff, Isola https://arxiv.org/pdf/2103.10426.pdf
A Variational Inference Framework for Inverse Problems. Maestrini, Aykroyd, Wand. https://arxiv.org/pdf/2103.05909.pdf
On Multilevel Monte Carlo Unbiased Gradient Estimation For Deep Latent Variable Models. Shi, Cornish http://proceedings.mlr.press/v130/shi21d/shi21d.pdf
Neighbor Embedding Variational Autoencoder. Tu, Liu, Xue, Wang, Guo https://arxiv.org/pdf/2103.11349.pdf
Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation. Xie, Liu, Zhang, Lu, Wang, Ding https://arxiv.org/pdf/2103.10693.pdf
Representation learning via invariant causal mechanisms. Mitrovic, McWilliams, Walker, Buesing, Blundell https://openreview.net/pdf/34eb2506b5a0b489bced58ab4bb038ff7356ade7.pdf
GLOWin: A Flow-based Invertible Generative Framework for Learning Disentangled Feature Representations in Medical Images. Sankar, Keicher, Eisawy, Parida, Pfister, Kim, Navab https://arxiv.org/pdf/2103.10868.pdf
Generative Particle Variational Inference via Estimation of Functional Gradients. Ratzlaff, Bai, Fuxin, Xu https://arxiv.org/pdf/2103.01291.pdf
Rate-Regularization and Generalization in VAEs. Bozkurt, Esmaeili, Tristan, Brooks, Dy, van de Meent http://proceedings.mlr.press/v130/bozkurt21a/bozkurt21a.pdf
Data Generation in Low Sample Size Setting Using Manifold Sampling and a Geometry-Aware VAE. Chadebec, Allassonniere https://arxiv.org/pdf/2103.13751.pdf
Contrastive Disentanglement in Generative Adversarial Networks. Pan, Tang, Chen, Xu https://arxiv.org/pdf/2103.03636.pdf
SetVAE: Learning Hierarchical Composition for Generative Modeling of Set-Structured Data. Kim, Yoo, Lee, Hong https://arxiv.org/pdf/2103.15619.pdf
Hierarchical and Self-Attended Sequence Autoencoder. Chien, Wang https://ieeexplore.ieee.org/abstract/document/9384306/
NeRF-VAE: A Geometry Aware 3D Scene Generative Model. Kosiorek, Strathmann, Zoran, Moreno https://arxiv.org/pdf/2104.00587.pdf
Intact-VAE: Estimating Treatment Effects under Unobserved Confounding. Wu, Fukumizu https://arxiv.org/pdf/2101.06662.pdf
Dirichlet Process Prior for Student's t Graph Variational Autoencoders. Zhao, Huang https://search.proquest.com/openview/2954e8d3c474db03241dabd98d7cac18/1?pq-origsite=gscholar&cbl=2032396
Variational Autoencoder Analysis of Ising Model Statistical Distributions and Phase Transitions. Yevick https://arxiv.org/pdf/2104.06368.pdf
Emotional Dialogue Generation Based on Conditional Variational Autoencoder and Dual Emotion Framework. Deng, Lin, Huang, Lan, Luo https://www.hindawi.com/journals/wcmc/2020/8881616/
Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder. Daniel, Tamar https://arxiv.org/pdf/2012.13253.pdf
Causal World Models by Unsupervised Deconfounding of Physical Dynamics. Li, Yang, Liu, Chen, Chen, Wang https://arxiv.org/pdf/2012.14228.pdf
Recent Advances in Variational Autoencoders With Representation Learning for Biomedical Informatics: A Survey. Wei, Mahmood https://ieeexplore.ieee.org/abstract/document/9311619/
Conditional out-of-distribution generation for unpaired data using transfer VAE. Lotfollahi, Naghipourfar, Theis, Wolf https://academic.oup.com/bioinformatics/article/36/Supplement_2/i610/6055927?login=true
Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler. Xie, Zheng, Li https://arxiv.org/pdf/2012.14936.pdf
Infer-AVAE: An Attribute Inference Model Based on Adversarial Variational Autoencoder. Zhou, Ding, Liu, Shen, Tong, Guan https://arxiv.org/pdf/2012.15005.pdf
Learning Invariant Representations for Deep Latent Variable Models. Wieser https://edoc.unibas.ch/79859/1/PhDThesis_MW.pdf
Towards causal generative scene models via competition of experts. von Kugelgen, Ustyuzhaninov, Gehler, Bethge, Scholkopf https://arxiv.org/pdf/2004.12906.pdf
A correspondence variational autoencoder for unsupervised acoustic word embeddings. Peng, Kamper, Livescu https://arxiv.org/pdf/2012.02221.pdf
Industrial process modeling and fault detection with recurrent Kalman variational autoencoder. Zhang, Zhu, Liu, Ge https://ieeexplore.ieee.org/abstract/document/9275274/
A data reconstruction method based on adversarial conditional variational autoencoder. Ren, Liu, Zhang, Jiang, Luo https://ieeexplore.ieee.org/abstract/document/9275168/
Quantifying common support between multiple treatment groups using a contrastive-VAE. Dai, Stultz http://proceedings.mlr.press/v136/dai20a/dai20a.pdf
Dynamics-based peptide-MHC binding optimization by a convolutional variational autoencoder: a use-case model for CASTELO. Bell, Domeniconi, Yang, Zhou, Zhang, Cong https://arxiv.org/pdf/2012.00672.pdf
Prior flow variational autoencoder: a density estimation model for non-intrusive load monitoring. Henriques, Morgan, Colcher https://arxiv.org/pdf/2011.14870.pdf
Semi-supervised bearing fault diagnosis and classification using variational autoencoder-based deep generative models. Zhang, Ye, Wang, Habetler https://ieeexplore.ieee.org/abstract/document/9270010/
Manga filling style conversion with screentone variational autoencoder. Xie, Li, Liu, Wong https://dl.acm.org/doi/abs/10.1145/3414685.3417873
World model as a graph: learning latent landmarks for planning. Zhang, Yang, Stadie https://arxiv.org/pdf/2011.12491.pdf
A statistical evaluation of machine learning algorithms for applied data analysis. Beaulac http://probability.ca/jeff/ftpdir/cedricthesis.pdf
Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints. Tonin, Patrinos, Suykens https://arxiv.org/pdf/2011.12659.pdf
Very deep VAEs generalize autoregressive models and can outperform them on images. Anonymous https://openreview.net/forum?id=RLRXCV6DbEJ
Deep variational autoencoder for mapping functional brain networks. Qiang, Dong, Ge, Lian, et al https://ieeexplore.ieee.org/abstract/document/9204760
Variational autoencoder based unsupervised domain adaptation for semantic segmentation. Li, Togo, Ogawa, Haseyama https://ieeexplore.ieee.org/abstract/document/9190973/
Variational autoencoder bidirectional long and short-term memory neural network soft-sensor model based on batch training strategy. Xie, Wang, Xing, Guo, Guo, Zhu https://ieeexplore.ieee.org/abstract/document/9200687
On the transferability of VAE embeddings using relational knowledge with semi-supervision. Stromfelt, Dickens, Garcez, Russo https://arxiv.org/pdf/2011.07137.pdf
Mutual Information based Method for Unsupervised Disentanglement of Video Representation. Sreekar, Tiwari, Namboodiri https://arxiv.org/pdf/2011.08614.pdf
Factorized Gaussian Process Variational Autoencoders. Jazbec, Pearce, Fortuin https://arxiv.org/pdf/2011.07255.pdf
Speech prediction in silent videos using variational autoencoders. Yadav, Sardana, Namboodiri, Hegde https://arxiv.org/pdf/2011.07340.pdf
VAEPP: Variational Autoencoder with a Pull-Back Prior. Chen, Liu, Cai, Xu, Pei https://link.springer.com/chapter/10.1007/978-3-030-63836-8_31
Learning Disentangled Representations with Attentive Joint Variational Autoencoder. Li, Liu, Chen, Luo https://link.springer.com/chapter/10.1007/978-3-030-63823-8_47
Quantifying and learning disentangled representations with limited supervision. Tonnaer, Rey, Menkovski, Holenderski, Portegies https://arxiv.org/pdf/2011.06070.pdf
Face Anti-Spoofing via Disentangled Representation Learning. Zhang, Yao, Zhang, Tai, Ding, Li, Huang, Song, Ma https://link.springer.com/chapter/10.1007%2F978-3-030-58529-7_38
VCE: Variational Convertor-Encoder for One-Shot Generalization. Li, Han, Xing https://arxiv.org/pdf/2011.06246.pdf
Mixtures of variational autoencoders. Ye, Bors https://www.researchgate.net/profile/Adrian_Bors/publication/345655967_Mixtures_of_Variational_Autoencoders/links/5faa0b9392851cc286a4e201/Mixtures-of-Variational-Autoencoders.pdf
Implicit supervision for fault detection and segmentation of emerging fault types with deep variational autoencoders. Chao, Adey, Fink https://www.researchgate.net/profile/Manuel_Arias_Chao/publication/338292289_Implicit_supervision_for_fault_detection_and_segmentation_of_emerging_fault_types_with_Deep_Variational_Autoencoders/links/5f99f697299bf1b53e4ed637/Implicit-supervision-for-fault-detection-and-segmentation-of-emerging-fault-types-with-Deep-Variational-Autoencoders.pdf
Towards Effective Intrusion Detection Using Log-cosh Conditional Variational AutoEncoder. Xu, Li, Yang, Shen https://ieeexplore.ieee.org/abstract/document/9244068/
Improving Variational Autoencoder for Text Modelling with Timestep-Wise Regularisation. Li, Li, Chen, Lin https://arxiv.org/pdf/2011.01136.pdf
SNF-CVAE: Computational method to predict drug-disease interactions using similarity network fusion and collective variational autoencoder. Jarada, Rokne, Alhajj https://www.sciencedirect.com/science/article/abs/pii/S0950705120307140
MAD-VAE: manifold awareness defence variational autoencoder. Morkock, Wang https://arxiv.org/pdf/2011.01755.pdf
Fraud detection via deep neural variational autoencoder oblique random forest. Anh, Khanh, Dat, Amouroux, Solanki https://ieeexplore.ieee.org/abstract/document/9242753
Counterfactual Fairness with disentangled causal effect variational autoencoder. Kim, Shin, Jang, Song, Joo, Kang, Moon https://arxiv.org/pdf/2011.11878.pdf
Integration of variational autoencoder and spatial clustering for adaptive multi-channel neural speech separation. Zmolikova, Delcroix, Burget, Nakatani, Cernocky https://arxiv.org/pdf/2011.11984.pdf
Use of Student's t-distribution for the latent layer in a coupled variational autoencoder. Chen, Svoboda, Nelson https://arxiv.org/pdf/2011.10879.pdf
An intelligent music generation based on variational autoencoder. Wang, Liu, Jin, Li, Ma https://ieeexplore.ieee.org/abstract/document/9262797
Vector embeddings with subvector permutation invariance using a triplet enhanced autoencoder. Matties https://arxiv.org/pdf/2011.09550.pdf
CompRess: self-supervised learning by compressing representations. Koohpayegani, Tejankar, Pirsiavash https://proceedings.neurips.cc/paper/2020/file/975a1c8b9aee1c48d32e13ec30be7905-Paper.pdf
Learning latent representations to influence multi-agent interaction. Xie, Losey, Tolsma, Finn, Sadigh http://iliad.stanford.edu/pdfs/publications/xie2020learning.pdf
Wavelet flow: fast training of high resolution normalizing flows. Yu, Derpanis, Brubaker https://arxiv.org/pdf/2010.13821v1.pdf
Embedding of molecular structure using molecular hypergraph variational autoencoder with metric learning. Koge, Ono, Huang, Altaf-Ul-Amin, Kanaya https://onlinelibrary.wiley.com/doi/pdf/10.1002/minf.202000203
Anomaly detection of heat energy usage in district heating substations using LSTM based variational autoencoder combined with physical model. Zhang, Fleyeh https://ieeexplore.ieee.org/abstract/document/9248108/
Semi-supervised learning by latent space energy-based model of symbol-vector coupling. Pang, Nijkamp, Cui, Han, Wu https://arxiv.org/pdf/2010.09359.pdf
Learning disentangled representations of video with missing data. Massague, Zhang, Feric, Camps, Yu https://proceedings.neurips.cc/paper/2020/file/24f2f931f12a4d9149876a5bef93e96a-Paper.pdf
Exemplar VAE: linking generative models, nearest neighbor retrieval, and data augmentation. Norouzi, Fleet, Norouzi https://proceedings.neurips.cc/paper/2020/file/63c17d596f401acb520efe4a2a7a01ee-Paper.pdf
Learning disentangled representations and group structure of dynamical environments. Quessard, Barrett, Clements https://papers.nips.cc/paper/2020/file/e449b9317dad920c0dd5ad0a2a2d5e49-Paper.pdf
The autoencoding variational autoencoder. Cemgil, Ghaisas, Dvijotham, Gowal, Kohli https://proceedings.neurips.cc/paper/2020/file/ac10ff1941c540cd87c107330996f4f6-Paper.pdf
Gradient boosted normalizing flows. Giaquinto, Banerjee https://proceedings.neurips.cc/paper/2020/file/fb5d9e209ebda9ab6556a31639190622-Paper.pdf
The evidence lower bound of variational autoencoders converges to a sum of three entropies. Lucke, Forster, Dai https://arxiv.org/pdf/2010.14860.pdf
Statistical guarantees for transformation based models with applications to implicit variational inference. Plummer, Zhou, Bhattacharya, Dunson, Pati https://arxiv.org/pdf/2010.14056.pdf
Scalable gaussian process variational autoencoders. Jazbec, Fortuin, Pearce, Mandt, Ratsch https://arxiv.org/pdf/2010.13472.pdf
A sober look at the unsupervised learning of disentangled representations and their evaluation. Locatello, Bauer, Lucic, Ratsch, Gelly, Scholkopf, Bachem https://www.jmlr.org/papers/volume21/19-976/19-976.pdf
Verification of field data and forecast model based on variational autoencoder in the application to the mechanized fund. Volkov, Dakhova, Andrianova, Budennyy https://www.onepetro.org/conference-paper/SPE-201936-MS
Content based image retrieval using deep convolutional variational autoencoder. Deore, Patil, Patil http://proteusresearch.org/gallery/pj-2654-53-f.pdf
Discrete latent space world models for reinforcement learning. Robine, Uelwer, Harmeling https://arxiv.org/pdf/2010.05767.pdf
Deep sequential latent variable models for noisy time series prediction. Qiu https://search.proquest.com/openview/278346ea1814c782db5840ed41ee78cd/1?pq-origsite=gscholar&cbl=18750&diss=y
Geometry-aware Hamiltonian variational auto-encoder. Chadebec, Mantoux, Allassonniere https://arxiv.org/pdf/2010.11518.pdf
Disentangling action sequences: finding correlated images. Wu, Wang https://arxiv.org/pdf/2010.11684.pdf
Max-affine spline insights into deep generative networks. Anonymous. https://openreview.net/pdf?id=uLwplzQgAk7
Quaternion-valued variational autoencoder. Grassucci, Comminiello, Uncini https://arxiv.org/pdf/2010.11647.pdf
Variational autoencoder based enhanced behavior characteristics classification for social robot detection. Deng, Dai, Sun, Lv https://link.springer.com/chapter/10.1007/978-981-15-9129-7_17
Learning optimal conditional priors for disentangled representations. Mita, Filippone, Michiardi https://arxiv.org/pdf/2010.09360.pdf
Unsupervised disentanglement of pitch and timbre for isolated musical instrument sounds. Luo, Cheuk, Nakano, Goto, Herremans https://program.ismir2020.net/static/final_papers/162.pdf
On the surprising similarities between supervised and self-supervised models. Geirhos, Narayanappa, Mitzkus, Bethge, Wichmann, Brendel https://arxiv.org/pdf/2010.08377.pdf
Sparse Gaussian process variational autoencoders. Ashman, So, Tebbutt, Fortuin, Pearce, Turner https://arxiv.org/pdf/2010.10177.pdf
Laughter synthesis: A comparison between variational autoencoder and autoencoder. Mansouri, Lachiri https://ieeexplore.ieee.org/abstract/document/9231607
Variational autoencoder with global and medium timescale auxiliaries for emotion recognition from speech. Almotlak, Weber, Wermter https://link.springer.com/chapter/10.1007/978-3-030-61609-0_42
Non-parallel voice conversion based on hierarchical latent embedding vector quantized variational autoencoder. Ho, Akagi https://www.isca-speech.org/archive/VCC_BC_2020/pdfs/VCC2020_paper_21.pdf
Plan-CVAE: a planning-based conditional variational autoencoder for story generation. Wang, Li, Zhao, Yan http://www.cips-cl.org/static/anthology/CCL-2020/CCL-20-083.pdf
Multi-task variational autoencoder for lung cancer prognosis on clinical data. Vo, Lee, Yang http://manuscriptlink-society-file.s3.amazonaws.com/kism/conference/sma2020/presentation/SMA-2020_paper_64.pdf
Semi-supervised learning with temporal variational auto-encoders for the diagnosis of failure severities and the prognosis of remaining useful life. Silva http://repositorio.uchile.cl/bitstream/handle/2250/177135/Semi-supervised-learning-with-temporal-variational-auto-encoders-for-the-diagnosis-of-failure-severities-and-the-prognosis-of.pdf?sequence=1
The NeteaseGames System for voice conversion challenge 2020 with vector-quantization variational autoencoder and wavenet. Zhang https://arxiv.org/pdf/2010.07630.pdf
Adversarial networks and autoencoders: the primal-dual relationship and generalization bounds. Husain, Nock, Williamson https://arxiv.org/abs/1902.00985
Self Normalizing Flows. Keller, Peters, Jaini, Hoogeboom, Forre, Welling https://arxiv.org/abs/2011.07248
VAE-SNE: a deep generative model for simultaneous dimensionality reduction and clustering. Graving, Couzin https://www.biorxiv.org/content/10.1101/2020.07.17.207993v1
Generative neurosymbolic machines. Jiang, Ahn https://arxiv.org/abs/2010.12152
Revisiting factorizing aggregated posterior in learning disentangled representations. Cheng, Li, Wang, Gu, Xu, Li, Metze https://arxiv.org/abs/2009.05739
Baseline system of voice conversion challenge 2020 with cyclic variational autoencoder and Parallel WaveGAN. Tobing, Wu, Toda https://arxiv.org/pdf/2010.04429.pdf
Dirichlet graph variational autoencoder. Li, Yu, Li, Zhang, Zhao, Rong, Cheng, Huang https://arxiv.org/pdf/2010.04408.pdf
Condition-transforming variational autoencoder for generating diverse short text conversations. Ruan, Ling, Zhu https://dl.acm.org/doi/abs/10.1145/3402884
A variational autoencoder for music generation controlled by tonal tension. Guo, Simpson, Magnusson, Kiefer, Herremans https://arxiv.org/pdf/2010.06230.pdf
Adaptive neural speech enhancement with a denoising variational autoencoder. Bando, Sekiguchi, Yoshii https://indico2.conference4me.psnc.pl/event/35/contributions/3960/attachments/1008/1049/Wed-1-11-6.pdf
Semi-supervised self-produced speech enhancement and suppression based on joint source modeling of air- and body-conducted signals using variational autoencoder. Seki, Takada, Toda https://indico2.conference4me.psnc.pl/event/35/contributions/3047/attachments/797/835/Thu-2-1-3.pdf
A coding perspective on deep latent variable models. Ullrich https://dare.uva.nl/search?identifier=2d6e0b96-90d3-4683-bbbe-00d2a7f1dd54
Probabilistic character motion synthesis using a hierarchical deep latent variable model. Ghorbani, Wloka, Etemad, Brubaker, Troje http://diglib.eg.org/handle/10.1111/cgf14116
Neighborhood-aware autoencoder for missing value imputation. Aidos, Tomas https://www.eurasip.org/Proceedings/Eusipco/Eusipco2020/pdfs/0001542.pdf
Learning disentangled representations with the Wasserstein autoencoder. Gaujac, Feige, Barber https://arxiv.org/pdf/2010.03459.pdf
Learning deep-latent hierarchies by stacking wasserstein autoencoders. Gaujac, Feige, Barber https://arxiv.org/pdf/2010.03467.pdf
Discriminative mixture variational autoencoder for semisupervised classification. Chen, Du, Liao https://pubmed.ncbi.nlm.nih.gov/33027027/
Complex-valued variational autoencoder: a novel generative model for direct representation of complex spectra. Nakashika https://indico2.conference4me.psnc.pl/event/35/contributions/3042/attachments/446/471/Wed-1-3-3.pdf
Sequential learning and regularization in variational recurrent autoencoder. Chien, Tsai https://www.eurasip.org/Proceedings/Eusipco/Eusipco2020/pdfs/0001613.pdf
NCP-VAE: Variational autoencoders with noise contrastive priors. Aneja, Schwing, Kautz, Vahdat https://arxiv.org/pdf/2010.02917.pdf
LEGAN: disentangled manipulation of directional lighting and facial expressions by leveraging human perceptual judgements. Banerjee, Joshi, Mahajan, Bhattacharya, Kyal, Mishra https://arxiv.org/pdf/2010.01464.pdf
Unsupervised hierarchical concept learning. Roychowdhury, Sontakke, Puri, Sarkar, Aggarwal, Badjatiya, Krishnamurthy https://arxiv.org/pdf/2010.02556.pdf
Disentangled generative causal representation learning. Shen, Liu, Dong, Lian, Chen, Zhang https://arxiv.org/pdf/2010.02637.pdf
Disentangled representations for sequence data using information bottleneck principle. Yamada, Kim, Miyoshi, Iwata, Yamakawa http://proceedings.mlr.press/v129/yamada20a/yamada20a.pdf
Unbiased gradient estimation for variational auto-encoders using coupled Markov chains. Ruiz, Titsias, Cemgil, Doucet https://arxiv.org/pdf/2010.01845.pdf
RG-flow: a hierarchical and explainable flow model based on renormalization group and sparse prior. Hu, Wu, You, Olshausen, Chen https://arxiv.org/pdf/2010.00029.pdf
Targeted VAE: structured inference and targeted learning for causal parameter estimation. Vowels, Camgoz, Bowden https://arxiv.org/pdf/2009.13472.pdf
Amortized mixture prior for variational sequence generation. Chien, Tsai https://ieeexplore.ieee.org/abstract/document/9206667
Collective dynamics of repeated inference in variational autoencoder rapidly find cluster structure. Nagano, Karakida, Okada https://www.nature.com/articles/s41598-020-72593-4
Physics-constrained predictive molecular latent space discovery with graph scattering variational autoencoder. Shervani-Tabar, Zabaras https://arxiv.org/pdf/2009.13878.pdf
Hierarchical sparse variational autoencoder for text encoding. Prokhorov, Li, Shareghi, Collier https://arxiv.org/pdf/2009.12421.pdf
Discrete memory addressing variational autoencoder for visual concept learning. Min, Su, Zhu, Zhang https://ieeexplore.ieee.org/abstract/document/9206745/
Embedding and generation of indoor climbing routes with variational autoencoder. Lo https://arxiv.org/pdf/2009.13271.pdf
Semi-supervised deep learning in motor imagery-based brain-computer interfaces with stacked variational autoencoder. Chen, Yu, Gu https://iopscience.iop.org/article/10.1088/1742-6596/1631/1/012007/pdf
A dimensionality reduction algorithm for mapping tokamak operation regimes using a variational autoencoder neural network. Wei, Brooks, Chandra, Levesque https://meetings.aps.org/Meeting/DPP20/Session/NP16.7
Multi-adversarial variational autoencoder nets for simultaneous image generation and classification. Imran, Terzopoulos https://link.springer.com/chapter/10.1007/978-981-15-6759-9_11
VAE-BRIDGE: variational autoencoder filter for Bayesian ridge imputation of missing data. Pereira, Abreu, Rodrigues https://www.researchgate.net/profile/Ricardo_Cardoso_Pereira/publication/342513773_VAE-BRIDGE_Variational_Autoencoder_Filter_for_Bayesian_Ridge_Imputation_of_Missing_Data/links/5f352e1b92851cd302f16ca5/VAE-BRIDGE-Variational-Autoencoder-Filter-for-Bayesian-Ridge-Imputation-of-Missing-Data.pdf
Variational online learning of neural dynamics. Zhao, Park https://openreview.net/pdf/9cae7375baff24b407ed87f731912eb212015301.pdf
Improving robustness and generality of NLP models using disentangled representations. Wu, Li, Ao, Meng, Wu, Li https://arxiv.org/pdf/2009.09587.pdf
A robust image watermarking approach using cycle variational autoencoder. Wei, Wang, Zhang https://www.hindawi.com/journals/scn/2020/8869096/
RVAE-ABFA: robust anomaly detection for high dimensional data using variational autoencoder. Gao, Shi, Dong, Chen, Mi, Huang, Shi https://ieeexplore.ieee.org/abstract/document/9202465/
Variational autoencoding dialogue sub-structures using a novel hierarchical annotation scheme. Tewari, Persiani, Umea https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1469903&dswid=-2871
DynamicVAE: decoupling reconstruction error and disentangled representation learning. Shao, Lin, Yang, Yao, Zhao, Abdelzaher https://arxiv.org/pdf/2009.06795.pdf
Deep transparent prediction through latent representation analysis. Kollias, Bouas, Vlaxos, Brillakis, Seferis, Kollia et al. https://arxiv.org/pdf/2009.07044.pdf
Interpretable operational risk classification with semi-supervised variational autoencoder. Fan, Zhang, Yang https://repository.ust.hk/ir/Record/1783.1-104743
Content-collaborative disentanglement representation learning for enhanced recommendation. Zhang, Zhu, Caverlee http://people.tamu.edu/~zhan13679/Paper/content-collaborative-dsentanglement.pdf
Optimized k-means clustering algorithm using an intelligent stable-plastic variational autoencoder with self-intrinsic cluster validation mechanism. Gikera, Mambo, Mwaura https://dl.acm.org/doi/abs/10.1145/3415088.3415125
Identifying treatment effects under unobserved confounding by causal representation learning. Anonymous https://openreview.net/forum?id=D3TNqCspFpM
Unsupervised discovery of interpretable latent manipulations in language VAEs. Anonymous https://openreview.net/pdf?id=DGttsPh502x
VideoGen: Generative modeling of videos using VQ-VAE and transformers. Anonymous https://openreview.net/forum?id=3InxcRQsYLf
Goal-conditioned variational autoencoder trajectory primitives with continuous and discrete latent codes. Osa, Ikemoto https://link.springer.com/article/10.1007/s42979-020-00324-7
Self-supervised disentanglement of modality-specific and shared factors improves multimodal generative models. Daunhawer, Sutter, Marcinkevics, Vogt https://mds.inf.ethz.ch/fileadmin/user_upload/gcpr_100_v01.pdf
Decoupling representation learning from reinforcement learning. Stooke, Lee, Abbeel, Laskin https://arxiv.org/pdf/2009.08319.pdf
DCAVN: Cervical cancer prediction and classification using deep convolutional and variational autoencoder network. Khamparia, Gupta, Rodrigues, de Albuquerque https://link.springer.com/article/10.1007/s11042-020-09607-w
Learning sampling in financial statement audits using vector quantised variational autoencoder neural networks. Schreyer, Sattarov, Gierbl, Reimer, Borth https://www.alexandria.unisg.ch/260768/1/ICAIF_2020_finale.pdf
Multilinear latent conditioning for generating unseen attribute combinations. Georgopoulos, Chrysos, Pantic, Panagakis https://arxiv.org/pdf/2009.04075.pdf
Ordinal-content VAE: Isolating ordinal-valued content factors in deep latent variable models. Kim, Pavlovic https://arxiv.org/pdf/2009.03034.pdf
Quasi-symplectic Langevin variational autoencoder. Wang, Delingette https://arxiv.org/pdf/2009.01675.pdf
Trajectory prediction by using contextual LSTM based variational autoencoder. Cho, Cha https://www.koreascience.or.kr/article/CFKO202024664105425.page
Dynamical variational autoencoders: a comprehensive review. Girin, Leglaive, Bie, Diard, Hueber, Alameda-Pineda https://arxiv.org/pdf/2008.12595.pdf
Metrics for exposing the biases of content-style disentanglement. Liu, Thermos, Valvano, Chartsias, O'Neil, Tsaftaris https://arxiv.org/pdf/2008.12378.pdf
Speech source separation using variational autoencoder and bandpass filter. Do, Tran, Chau https://ieeexplore.ieee.org/abstract/document/9178274/
Variations in variational autoencoders - a comparative evaluation. Wei, Garcia, El-Sayed, Peterson, Mahmood https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9171997
Variational information bottleneck for semi-supervised classification. Voloshynovskiy, Taran, Kondah, Holotyak, Rezende https://www.mdpi.com/1099-4300/22/9/943
Conditional introspective variational autoencoder for image synthesis. Zheng, Cheng, Kang, Yao, Tian https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9172064
Deep generative models in inversion: a review and development of a new approach based on a variational autoencoder. Lopez-Alvis, Laloy, Nguyen, Hermans https://arxiv.org/pdf/2008.12056.pdf
Robust vision-based workout analysis using diversified deep latent variable model. Xiong, Berkovsky, Sharan, Liu, Coiera https://ieeexplore.ieee.org/abstract/document/9175454/
Variational autoencoders. Fleuret https://fleuret.org/ee559-draft/materials/ee559-slides-7-4-VAE.pdf
Disentangling multiple features in video sequences using Gaussian processes in variational autoencoders. Bhagat, Uppal, Yin, Lim https://arxiv.org/abs/2001.02408
Improved techniques for training score-based generative models. Song, Ermon https://arxiv.org/abs/2006.09011
Optimal variance control of the score function gradient estimator for importance weighted bounds. Lievin, Dittadi, Christensen, Winther https://arxiv.org/abs/2008.01998
Rewriting a deep generative model. Bau, Liu, Wang, Zhu, Torralba https://arxiv.org/pdf/2007.15646.pdf
SRFlow: learning the super-resolution space with normalizing flow. Lugmayr, Danelljan, Gool, Timofte http://de.arxiv.org/pdf/2006.14200/
Generalized energy based models. Arbel, Zhou, Gretton https://arxiv.org/abs/2003.05033
Variational autoencoder for anti-cancer drug response prediction. Xie, Dong, Jing, Ren https://arxiv.org/pdf/2008.09763.pdf
Unsupervised clustering through Gaussian mixture variational autoencoder with non-reparameterized variational inference and std annealing. Li, Zhao, Chen, Xu, Li, Pei https://netman.aiops.org/wp-content/uploads/2020/08/PID6423661.pdf
Toward discriminating and synthesizing motion traces using deep probabilistic generative models. Zhou, Liu, Zhang, Trajcevski https://ieeexplore.ieee.org/abstract/document/9165954
Generating in-between images through learned latent space representation using variational autoencoders. Cristovao, Nakada, Tanimura, Asoh https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9166477
xGAIL: explainable generative adversarial imitation learning for explainable human decision analysis. Pan, Huang, Li, Zhou, Luo https://dl.acm.org/doi/abs/10.1145/3394486.3403186
A survey on generative adversarial networks for imbalance problems in computer vision tasks. Sampath, Maurtua, Martin, Gutierrez https://assets.researchsquare.com/files/rs-45616/v1_stamped.pdf
Linear disentangled representations and unsupervised action estimation. Painter, Hare, Prugel-Bennett https://arxiv.org/pdf/2008.07922.pdf
Learning interpretable representation for controllable polyphonic music generation. Wang, Wang, Zhang, Xia https://arxiv.org/pdf/2008.07122.pdf
Disentangled item representation for recommender systems. Cui, Yu, Wu, Liu, Wang https://arxiv.org/pdf/2008.07178.pdf
Joint variational autoencoders for recommendation with implicit feedback. Askari, Szlichta, Salehi-Abari https://arxiv.org/pdf/2008.07577.pdf
Transferred discrepancy: quantifying the difference between representations. Feng, Zhai, He, Wang, Dong https://arxiv.org/pdf/2007.12446.pdf
What should not be contrastive in contrastive learning. Xiao, Wang, Efros, Darrell https://arxiv.org/pdf/2008.05659.pdf
SCAN: learning to classify images without labels. Gansbeke, Vandenhende, Georgoulis, Proesmans, Gool http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550273.pdf
SG-VAE: scene grammar variational autoencoder to generate new indoor scenes. Purkait, Zach, Reid http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123690154.pdf
Unsupervised domain adaptation in the wild via disentangling representation learning. Li, Wan, Wang, Kot https://link.springer.com/article/10.1007/s11263-020-01364-5
Variational autoencoder for generation of antimicrobial peptides. Dean, Walper https://pubs.acs.org/doi/full/10.1021/acsomega.0c00442
Multimodal deep generative models for trajectory prediction: a conditional variational autoencoder approach. Ivanovic, Leung, Schmerling, Pavone https://arxiv.org/pdf/2008.03880.pdf
A conditional variational autoencoder algorithm for reconstructing defect data of magnetic flux leakage. Lu, Wu, Zhang https://ieeexplore.ieee.org/abstract/document/9164107/
CRUDS: Counterfactual recourse using disentangled subspaces. Downs, Chu, Yacoby, Doshi-Velez, Pan https://finale.seas.harvard.edu/files/finale/files/cruds-_counterfactual_recourse_using_disentangled_subspaces.pdf
Using deep variational autoencoder networks for recognizing geochemical anomalies. Luo, Xiong, Zuo https://www.sciencedirect.com/science/article/abs/pii/S088329272030202X
SeCo: exploring sequence supervision for unsupervised representation learning. Yao, Zhang, Qiu, Pan, Mei https://arxiv.org/pdf/2008.00975.pdf
LoCo: local contrastive representation learning. Xiong, Ren, Urtasun https://arxiv.org/pdf/2008.01342.pdf
Geometrically enriched latent spaces. Arvanitidis, Hauberg, Scholkopf https://arxiv.org/pdf/2008.00565.pdf
PDE-driven spatiotemporal disentanglement. Dona, Franceschi, Lamprier, Gallinari https://arxiv.org/pdf/2008.01352.pdf
Dynamics generalization via information bottleneck in deep reinforcement learning. Lu, Lee, Abbeel, Tiomkin https://arxiv.org/pdf/2008.00614.pdf
Semi-supervised adversarial variational autoencoder. Zemouri https://www.preprints.org/manuscript/202008.0051/v1
Improving sample quality by training and sampling from latent energy. Xiao, Yan, Amit https://invertibleworkshop.github.io/accepted_papers/pdfs/4.pdf
Quantitative understanding of VAE by interpreting ELBO as rate distortion cost of transform coding. Nakagawa, Kato https://arxiv.org/pdf/2007.15190.pdf
dMelodies: a music dataset for disentanglement learning. Pati, Gururani, Lerch https://arxiv.org/pdf/2007.15067.pdf
Privacy-preserving voice analysis via disentangled representations. Aloufi, Haddadi, Boyle https://arxiv.org/pdf/2007.15064.pdf
Approximation based variance reduction for reparameterization gradients. Geffner, Domke https://arxiv.org/pdf/2007.14634.pdf
Online variational learning of Dirichlet process mixtures of scaled Dirichlet distributions. Manouchehri, Nguyen, Koochemeshkian, Bouguila, Fan https://link.springer.com/article/10.1007/s10796-020-10027-2
A commentary on the unsupervised learning of disentangled representations. Locatello, Bauer, Lucic, Ratsch, Gelly, Scholkopf, Bachem https://arxiv.org/pdf/2007.14184.pdf
Learning disentangled representations with latent variation predictability. Zhu, Xu, Tao https://arxiv.org/pdf/2007.12885.pdf
TDAE: autoencoder-based automatic feature learning method for the detection of DNS tunnel. Wu, Zhang, Yin https://ieeexplore.ieee.org/abstract/document/9149162/
A variational autoencoder mixture model for online behavior recommendation. Nguyen, Cho https://ieeexplore.ieee.org/document/9144583/?denied=
Towards nonlinear disentanglement in natural data with temporal sparse coding. Klindt, Schott, Sharma, et al. https://arxiv.org/pdf/2007.10930.pdf
Improving generative modelling in VAEs using multimodal prior. Abrol, Sharma, Patra https://ora.ox.ac.uk/objects/uuid:d4a7306e-6d5c-4e84-9525-4363723328f8
Generative flows with matrix exponential. Xiao, Liu https://arxiv.org/pdf/2007.09651.pdf
Undirected graphical models as approximate posteriors. Vahdat, Andriyash, Macready https://proceedings.icml.cc/static/paper_files/icml/2020/1354-Paper.pdf
DMRAE: discriminative manifold regularized autoencoder for sparse and robust feature learning. Farajian, Adibi https://link.springer.com/article/10.1007/s13748-020-00211-5
Variational Bayesian quantization. Yang, Bamler, Mandt https://proceedings.icml.cc/static/paper_files/icml/2020/6168-Paper.pdf
Dispersed exponential family mixture VAEs for interpretable text generation. Shi, Zhou, Miao, Li https://proceedings.icml.cc/static/paper_files/icml/2020/3242-Paper.pdf
Empirical study of the benefits of overparameterization in learning latent variable models. Buhai, Halpern, Kim, Risteski, Sontag https://proceedings.icml.cc/static/paper_files/icml/2020/5645-Paper.pdf
Relaxed-responsibility hierarchical discrete VAEs. Willetts, Miscouridou, Roberts, Holmes https://arxiv.org/pdf/2007.07307.pdf
Deep generative video compression with temporal autoregressive transforms. Yang, Yang, Marino, Yang, Mandt https://joelouismarino.github.io/files/papers/2020/seq_flows_compression/seq_flows_compression.pdf
Learning invariances for interpretability using supervised VAE. Nguyen, Martinez https://arxiv.org/pdf/2007.07591.pdf
Towards a theoretical understanding of the robustness of variational autoencoders. Camuto, Willetts, Roberts, Holmes, Rainforth https://arxiv.org/pdf/2007.07365.pdf
Hierarchical linear disentanglement of data-driven conceptual spaces. Alshaikh, Bouraoui, Schockaert https://www.ijcai.org/Proceedings/2020/0494.pdf
Distribution augmentation for generative modeling. Jun, Child, Chen, Schulman, Ramesh, Radford, Sutskever https://proceedings.icml.cc/static/paper_files/icml/2020/6095-Paper.pdf
Deep heterogeneous autoencoder for subspace clustering of sequential data. Siddique, Mozhdehi, Medeiros https://arxiv.org/pdf/2007.07175.pdf
Self-reflective variational autoencoder. Apostolopoulou, Rosenfeld, Dubrawski https://arxiv.org/pdf/2007.05166.pdf
Disentangled variational autoencoder based multi-label classification with covariance-aware multivariate probit model. Bai, Kong, Gomes https://arxiv.org/pdf/2007.06126.pdf
InfoGAN-CR and ModelCentrality: self-supervised model training and selection for disentangling GANs. Lin, Thekumparampil, Fanti, Oh https://proceedings.icml.cc/static/paper_files/icml/2020/4410-Paper.pdf
Reconstruction bottlenecks in object-centric generative models. Engelcke, Jones, Posner https://arxiv.org/pdf/2007.06245.pdf
Variational learning of Bayesian neural networks via Bayesian dark knowledge. Shen, Chen, Deng https://www.researchgate.net/profile/Zhi-Hong_Deng2/publication/342798883_Variational_Learning_of_Bayesian_Neural_Networks_via_Bayesian_Dark_Knowledge/links/5f0db9a5a6fdcc3ed7056bb0/Variational-Learning-of-Bayesian-Neural-Networks-via-Bayesian-Dark-Knowledge.pdf
A look inside the black-box: towards the interpretability of conditioned variational autoencoder for collaborative filtering. Carraro, Polato, Aiolli https://dl.acm.org/doi/abs/10.1145/3386392.3399305
Topologically-based variational autoencoder for time series classification. Rivera-Castro, Moustafa, Pilyugina, Burnaev https://www.latinxinai.org/assets/pdf/icml2020/all_posters/pdf/Poster_Rodrigo_Rivera.pdf
Modeling and interpreting road geometry from a driver's perspective using variational autoencoders. Wang, Chen, Wijnands, Guo https://onlinelibrary.wiley.com/doi/abs/10.1111/mice.12594
Do compressed representations generalize better? Hafez-Kolahi, Kasaei, Soleymani-Baghshah https://www.researchgate.net/profile/Mahdieh_Soleymani/publication/335989891_Do_Compressed_Representations_Generalize_Better/links/5ed54369299bf1c67d32500f/Do-Compressed-Representations-Generalize-Better.pdf
PRI-VAE: principle-of-relevant-information variational autoencoders. Li, Yu, Principe, Li, Wu https://arxiv.org/abs/2007.06503
Latent variable modelling with hyperbolic normalizing flows. Bose, Smofsky, Liao, Panangaden, Hamilton. https://arxiv.org/abs/2002.06336
Object-centric learning with slot attention. Locatello, Weissenborn, Unterthiner, Mahendran, Heigold, Uszkoreit, Dosovitskiy, Kipf https://arxiv.org/abs/2006.15055
NVAE: A deep hierarchical variational autoencoder. Vahdat, Kautz https://arxiv.org/abs/2007.03898
Variational inference for sequential data with future likelihood estimates. Kim, Jang, Yang, Kim http://ailab.kaist.ac.kr/papers/pdfs/KJYK2020.pdf
Exponential tilting of generative models: improving sample quality by training and sampling from latent energy. Xiao, Yan, Amit https://arxiv.org/pdf/2006.08100.pdf
Hierarchical path VAE-GAN: generating diverse videos from a single sample. Gur, Benaim, Wolf https://arxiv.org/pdf/2006.12226.pdf
Contrastive code representation learning. Jain, Jain, Zhang, Abbeel, Gonzalez, Stoica https://arxiv.org/pdf/2007.04973.pdf
Efficient learning of generative models via finite-difference score matching. Pang, Xu, Li, Song, Ermon, Zhu https://arxiv.org/pdf/2007.03317.pdf
Towards recurrent autoregressive flow models. Mern, Morales, Kochenderfer https://arxiv.org/pdf/2006.10096.pdf
Benefiting deep latent variable models via learning the prior and removing latent regularization. Morrow, Chiu https://arxiv.org/pdf/2007.03640.pdf
VAEs in the presence of missing data. Collier, Nazabal, Williams https://arxiv.org/pdf/2006.05301.pdf
Mixture of discrete normalizing flows for variational inference. Kusmierczyk, Klami https://arxiv.org/pdf/2006.15568.pdf
Spatial revising variational autoencoder-based feature extraction method for hyperspectral images. Yu, Zhang, Shen https://ieeexplore.ieee.org/abstract/document/9109663/
Monitoring of nonlinear processes with multiple operating modes through a novel Gaussian mixture variational autoencoder model. Tang, Peng, Dong, Zhang, Zhao https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9119397
A new approach for smoking event detection using a variational autoencoder and neural decision forest. Fan, Gao https://ieeexplore.ieee.org/abstract/document/9129702/
Isometric Gaussian process latent variable model for dissimilarity data. Jorgensen, Hauberg https://arxiv.org/pdf/2006.11741.pdf
VAEM: a deep generative model for heterogeneous mixed type data. Ma, Tschiatschek, Hernandez-Lobato, Turner https://arxiv.org/pdf/2006.11941.pdf
Disentangling by subspace diffusion. Pfau, Higgins, Botev, Racaniere https://arxiv.org/pdf/2006.12982.pdf
Latent variable modeling with random features. Gundersen, Zhang, Engelhardt https://arxiv.org/pdf/2006.11145.pdf
Variational orthogonal features. Burt, Rasmussen, van der Wilk https://arxiv.org/pdf/2006.13170.pdf
Scale-space autoencoders for unsupervised anomaly segmentation in brain MRI. Baur, Wiestler, Albarqouni, Navab https://arxiv.org/pdf/2006.12852.pdf
Learning from demonstration with weakly supervised disentanglement. Hristov, Ramamoorthy. https://arxiv.org/pdf/2006.09107.pdf
A tutorial on VAEs: from Bayes' rule to lossless compression. Yu https://arxiv.org/pdf/2006.10273.pdf
Density deconvolution with normalizing flows. Dockhorn, Ritchie, Yu, Murray https://arxiv.org/pdf/2006.09396.pdf
Rethinking semi-supervised learning in VAEs. Joy, Schmon, Torr, Siddharth, Rainforth https://arxiv.org/pdf/2006.10102.pdf
DisARM: An antithetic gradient estimator for binary latent variables. Dong, Mnih, Tucker https://arxiv.org/pdf/2006.10680.pdf
On casting importance weighted autoencoder to an EM algorithm to learn deep generative models. Kim, Hwang, Kim http://proceedings.mlr.press/v108/kim20b/kim20b.pdf
Sparsity enforcement on latent variables for better disentanglement in VAE. Cristovao, Nakada, Tanimura, Asoh https://www.jstage.jst.go.jp/article/pjsai/JSAI2020/0/JSAI2020_2K6ES202/_pdf
Isometric autoencoders. Atzmon, Gropp, Lipman https://arxiv.org/pdf/2006.09289.pdf
Constraining variational inference with geometric Jensen-Shannon divergence. Deasy, Simidjievski, Lio https://arxiv.org/pdf/2006.10599.pdf
Neural decomposition: functional ANOVA with variational autoencoders. Martens, Yau http://proceedings.mlr.press/v108/martens20a/martens20a.pdf
Variational autoencoder with learned latent structure. Connor, Canal, Rozell https://arxiv.org/pdf/2006.10597.pdf
Transfer learning approach for botnet detection based on recurrent variational autoencoder. Kim, Sim, Kim, Wu, Hahm https://dl.acm.org/doi/abs/10.1145/3391812.3396273
Anomaly-based intrusion detection from network flow features using variational autoencoder. Zavrak, Iskefiyeli https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9113298
Longitudinal variational autoencoder. Ramchandran, Tikhonov, Koskinen, Lahdesmaki https://arxiv.org/pdf/2006.09763.pdf
Gaussian mixture variational autoencoder for semi-supervised topic modeling. Zhou, Ban, Zhang, Li, Zhang https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9112154
Structural autoencoders improve representations for generation and transfer. Leeb, Annadani, Bauer, Scholkopf https://arxiv.org/pdf/2006.07796.pdf
High-dimensional similarity search with quantum-assisted variational autoencoder. Gao, Wilson, Vandal, Vinci, Nemani, Rieffel https://arxiv.org/pdf/2006.07680.pdf
Robust variational autoencoder for tabular data with beta divergence. Akrami, Aydore, Leahy, Joshi https://arxiv.org/pdf/2006.08204.pdf
Evidence-aware inferential text generation with vector quantised variational autoencoder. Guo, Tang, Duan, Yin, Jiang, Zhou https://arxiv.org/pdf/2006.08101.pdf
LaRVAE: label replacement VAE for semi-supervised disentanglement learning. Nie, Wang, Patel, Baraniuk https://arxiv.org/pdf/2006.07460.pdf
AR-DAE: towards unbiased neural entropy gradient estimation. Lim, Courville, Pal, Huang https://arxiv.org/pdf/2006.05164.pdf
Learning latent space energy-based prior model. Pang, Han, Nijkamp, Zhu, Wu https://arxiv.org/pdf/2006.08205.pdf
Disentanglement for discriminative visual recognition. Liu https://arxiv.org/pdf/2006.07810.pdf
Deep critiquing for VAE-based recommender systems. Luo, Yang, Wu, Sanner https://ssanner.github.io/papers/sigir20_cevae.pdf
To regularize or not to regularize? The bias variance trade-off in regularized VAEs. Mondal, Asnani, Singla https://arxiv.org/pdf/2006.05838.pdf
DisCont: self-supervised visual attribute disentanglement using context vectors. Bhagat, Udandarao, Uppal https://arxiv.org/pdf/2006.05895.pdf
Interpretable deep graph generation with node-edge co-disentanglement. Guo, Zhao, Qin, Wu, Shehu, Ye https://arxiv.org/pdf/2006.05385.pdf
Output-relevant variational autoencoder for just-in-time soft sensor modeling with missing data. Guo, Bai, Huang https://www.sciencedirect.com/science/article/pii/S0959152420302195
Deep variational autoencoder: an efficient tool for PHM frameworks. Zemouri, Levesque, Amyot, Hudon, Kokoko https://ieeexplore.ieee.org/abstract/document/9115491/
Model extraction defence using modified variational autoencoder. Gupta http://etd.iisc.ac.in/handle/2005/4430
Variational variance: simple and reliable predictive variance parameterization. Stirn, Knowles https://arxiv.org/pdf/2006.04910.pdf
Probabilistic autoencoder. Bohm, Seljak https://arxiv.org/pdf/2006.05479.pdf
Deep latent-variable models for natural language understanding and generation. Shen https://dukespace.lib.duke.edu/dspace/bitstream/handle/10161/20848/Shen_duke_0066D_15473.pdf?sequence=1
Generalization via information bottleneck in deep reinforcement learning. Lu, Tiomkin, Abbeel https://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-56.pdf
Optimal configuration of concentrating solar power in multienergy power systems with an improved variational autoencoder. Qi, Hu, Dong, Fan, Dong, Xiao https://www.sciencedirect.com/science/article/abs/pii/S030626192030636X
tvGP-VAE: tensor-variate gaussian process prior variational autoencoder. Campbell, Lio https://arxiv.org/pdf/2006.04788.pdf
OC-FakeDect: classifying deepfakes using one-class variational autoencoder. Khalid, Woo http://openaccess.thecvf.com/content_CVPRW_2020/papers/w39/Khalid_OC-FakeDect_Classifying_Deepfakes_Using_One-Class_Variational_Autoencoder_CVPRW_2020_paper.pdf
Tuning a variational autoencoder for data accountability problem in the Mars science laboratory ground data system. Lakhmiri, Alimo, Le Digabel https://arxiv.org/pdf/2006.03962.pdf
Generate high fidelity images with generative variational autoencoder. Sagar https://d1wqtxts1xzle7.cloudfront.net/63577040/GVAE20200609-33737-2ojfbd.pdf?1591724283=&response-content-disposition=inline%3B+filename%3DGenerate_High_Fidelity_Images_With_Gener.pdf&Expires=1592933681&Signature=NI4uAK8CTTGPoWx-KYkCl5giVzyEfhUsIkGh4lM4bSTXmWOc-oCX4T~gX5x2HB4gJVX4ZtZy8qghJf7qGJ2GSrP~89PMb1dzX3KTyMUbWRvK1InS28wuc86KMEanX7gj7Tu0IrwMoRLjpdZZnc7Jt00Ga9A1N79n8MNj4fdeRFkZE5h8BgUTY9u11zN4pVSj~Rz3clsb~RIJldCmSZ3np31Qo8RAnVWap9MMJMoYWPq8EnBJ367G3ip~mSHh1lDZLGRCuVupWLxIzF1q4SAWfLvG75~CTacPvQneelwUQnTRwf93H9FRw7FpbrbpuJrOu-7tcZJdowAIRsDh-EVHEg__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA
PuppeteerGAN: arbitrary portrait animation with semantic-aware appearance transformation. Chen, Wang, Yuan, Tao http://openaccess.thecvf.com/content_CVPR_2020/papers/Chen_PuppeteerGAN_Arbitrary_Portrait_Animation_With_Semantic-Aware_Appearance_Transformation_CVPR_2020_paper.pdf
Joint training of variational auto-encoder and latent energy-based model. Han, Nijkamp, Zhou, Pang, Zhu, Wu http://openaccess.thecvf.com/content_CVPR_2020/papers/Han_Joint_Training_of_Variational_Auto-Encoder_and_Latent_Energy-Based_Model_CVPR_2020_paper.pdf
Feature-based generative design of mechanisms with a variational autoencoder. Brandt https://search.proquest.com/docview/2408896683?pq-origsite=gscholar&fromopenview=true
Denoising diffusion probabilistic models. Ho, Jain, Abbeel https://arxiv.org/pdf/2006.11239.pdf
Simple and effective VAE training with calibrated decoders. Rybkin, Daniilidis, Levine https://arxiv.org/abs/2006.13202
SurVAE Flows: surjections to bridge the gap between VAEs and flows. Nielsen, Jaini, Hoogeboom, Winther, Welling https://arxiv.org/pdf/2007.02731.pdf
Mutual information gradient estimation for representation learning. Wen, Zhou, He, Zhou, Xu https://arxiv.org/pdf/2005.01123.pdf
Cross-VAE: towards disentangling expression from identity for human faces. Wu, Jia, Xie, Qi, Shi, Tian https://ieeexplore.ieee.org/abstract/document/9053608
CONFIG: controllable neural face image generation. Kowalski, Garbin, Estellers, Baltrusaitis, Johnson, Shotton https://arxiv.org/abs/2005.02671
Variance constrained autoencoding. Braithwaite, O'Connor, Kleijn https://arxiv.org/pdf/2005.03807.pdf
Jigsaw-VAE: towards balancing features in variational autoencoders. Taghanaki, Havaei, Lamb, Sanghi https://arxiv.org/pdf/2005.05496.pdf
The usefulness of the deep learning method of variational autoencoder to reduce measurement noise in Glaucomatous visual fields. Asaoka, Murata, Asano, Matsuura, Fujino et al. https://www.nature.com/articles/s41598-020-64869-6
methCancer-gen: a DNA methylome dataset generator for user-specified cancer type based on conditional variational autoencoder. Choi, Chae https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-3516-8
Deep latent variable model for longitudinal group factor analysis. Qiu, Chinchilli, Lin https://arxiv.org/pdf/2005.05210.pdf
Prototypical contrastive learning of unsupervised representations. Li, Zhou, Xiong, Socher, Hoi https://arxiv.org/pdf/2005.04966.pdf
A semi-supervised approach for identifying abnormal heart sounds using variational autoencoder. Banerjee, Ghose https://ieeexplore.ieee.org/abstract/document/9054632
Semi-supervised neural chord estimation based on a variational autoencoder with discrete labels and continuous textures of chords. Wu, Carsault, Nakamura, Yoshii https://arxiv.org/pdf/2005.07091.pdf
A deeper look at the unsupervised learning of disentangled representations in beta-VAE from the perspective of core object recognition. Sikka https://arxiv.org/pdf/2005.07114.pdf
Many-to-many voice conversion using cycle-consistent variational autoencoder with multiple decoders. Yook, Leem, Lee, Yoo https://www.isca-speech.org/archive/Odyssey_2020/pdfs/32.pdf
HyperVAE: a minimum description length variational hyper-encoding network. Nguyen, Tran, Gupta, Rana, Dam, Venkatesh https://arxiv.org/pdf/2005.08482.pdf
Disentangling in latent space by harnessing a pretrained generator. Nitzan, Bermano, Li, Cohen-Or https://arxiv.org/pdf/2005.07728.pdf
Attention mechanism for human motion prediction. Al-aqel, Khan https://ieeexplore.ieee.org/abstract/document/9096777
Brain lesion detection using a robust variational autoencoder and transfer learning. Akrami, Joshi, Li, Aydore, Leahy https://ieeexplore.ieee.org/abstract/document/9098405
Deep variational autoencoder for modeling functional brain networks and ADHD identification. Qiang, Dong, Sun, Ge, Liu https://ieeexplore.ieee.org/abstract/document/9098480
Dual autoencoders generative adversarial network for imbalanced classification problem. Wu, Cui, Welsch https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9093005
Pairwise supervised hashing with Bernoulli variational auto-encoder and self-control gradient estimator. Dadaneh, Boluki, Yin, Zhou, Qian https://arxiv.org/pdf/2005.10477.pdf
S3VAE: self-supervised sequential VAE for representation disentanglement and data generation. Zhu, Min, Kadav, Graf https://arxiv.org/pdf/2005.11437.pdf
VMI-VAE: variational mutual information maximization framework for VAE with discrete and continuous priors. Serdega, Kim https://arxiv.org/pdf/2005.13953.pdf
Variational autoencoder with embedded student-t mixture model for authorship attribution. Boenninghoff, Zeiler, Nickel, Kolossa https://arxiv.org/pdf/2005.13930.pdf
Deep learning on the 2-dimensional Ising model to extract the crossover region with a variational autoencoder. Walker, Tam, Jarrell https://arxiv.org/pdf/2005.13742.pdf
Context-dependent token-wise variational autoencoder for topic modeling. Masada
High-fidelity audio generation and representation learning with guided adversarial autoencoder. Haque, Rana, Schuller https://arxiv.org/pdf/2006.00877.pdf
Adaptive efficient coding: a variational auto-encoder approach. Aridor, Grechi, Woodford https://www.biorxiv.org/content/biorxiv/early/2020/05/31/2020.05.29.124453.full.pdf
Noise-to-compression variational autoencoder for efficient end-to-end optimized image coding. Luo, Li, Dai, Xu, Cheng, Li, Xiong https://ieeexplore.ieee.org/abstract/document/9105715
Guided image generation with conditional invertible neural networks. Ardizzone, Luth, Kruse, Rother, Kothe https://arxiv.org/pdf/1907.02392.pdf
Vector quantization-based regularization for autoencoders . Wu, Flierl https://arxiv.org/abs/1905.11062 https://github.com/AlbertOh90/Soft-VQ-VAE
MHVAE: a human-inspired deep hierarchical generative model for multimodal representations learning. Vasco, Melo, Paiva https://arxiv.org/pdf/2006.02991.pdf
NewtonianVAE: proportional control and goal identification from pixels via physical latent spaces. Jaques, Burke, Hospedales https://arxiv.org/pdf/2006.01959.pdf
Constrained variational autoencoder for improving EEG based speech recognition systems. Krishna, Tran, Carnahan, Tewfik https://arxiv.org/pdf/2006.02902.pdf
Variational mutual information maximization framework for VAE latent codes with continuous and discrete priors. Serdega https://arxiv.org/pdf/2006.02227.pdf
Monitoring and prediction of big process data with deep latent variable models and parallel computing. Yang, Ge https://www.sciencedirect.com/science/article/pii/S0959152420302171
Polarized-VAE: proximity based disentangled representation learning for text generation. Balasubramanian, Kobyzev, Bahuleyan, Shapiro, Vechtomova https://arxiv.org/pdf/2004.10809.pdf
Discretized bottleneck: posterior-collapse-free sequence-to-sequence learning. Zhao, Yu, Mahapatra, Su, Chen https://arxiv.org/pdf/2004.10603.pdf
Remote sensing image captioning via Variational Autoencoder and Reinforcement learning. Shen, Liu, Zhou, Zhao, Liu https://www.sciencedirect.com/science/article/abs/pii/S0950705120302586
Conditioned variational autoencoder for top-N item recommendation. Polato, Carraro, Aiolli https://arxiv.org/pdf/2004.11141.pdf
Multi-speaker and multi-domain emotional voice conversion using factorized hierarchical variational autoencoder. Elgaar, Park, Lee https://ieeexplore.ieee.org/abstract/document/9054534
beta-variational autoencoder as an entanglement classifier. Sa, Roditi https://arxiv.org/pdf/2004.14420.pdf
Preventing posterior collapse with Levenshtein variational autoencoder. Havrylov, Titov https://arxiv.org/pdf/2004.14758.pdf
Multi-decoder RNN autoencoder based on variational Bayes method. Kaji, Watanabe, Kobayashi https://arxiv.org/pdf/2004.14016.pdf
Bootstrap latent-predictive representations for multitask reinforcement learning. Guo, Pires, Piot, Grill, Altche, Munos, Azar https://arxiv.org/pdf/2004.14646.pdf
Anomaly detection of time series with smoothness-inducing sequential variational auto-encoder. Li, Yan, Wang, Jin https://ieeexplore.ieee.org/abstract/document/9064715
A batch normalized inference network keeps the KL vanishing away. Zhu, Bi, Liu, Ma, Li, Wu https://arxiv.org/pdf/2004.12585.pdf
From symbols to signals: symbolic variational autoencoders. Devaraj, Chowdhury, Jain, Kubricht, Tu, Santa https://ieeexplore.ieee.org/abstract/document/9054016
Unsupervised real image super-resolution via generative variational autoencoder. Liu, Sui, Wang, Li, Cani, Chan https://arxiv.org/pdf/2004.12811.pdf
Interpreting rate-distortion of variational autoencoder and using model uncertainty for anomaly detection. Park, Adosoglou, Pardalos https://arxiv.org/pdf/2005.01889.pdf
Computational representation of Chinese characters: comparison between Singular Value Decomposition and Variational Autoencoder. Tseng, Hsieh http://www.papersearch.net/thesis/article.asp?key=3766591
Curiosity-driven variational autoencoder for deep Q network. Han, Zhang, Mao https://link.springer.com/chapter/10.1007/978-3-030-47426-3_59
6GCVAE: gated convolutional variational autoencoder for IPv6 Target Generation. Cui, Gou, Xiong https://link.springer.com/chapter/10.1007/978-3-030-47426-3_47
Text-based malicious domain names detection based on variational autoencoder and supervised learning. Sun, Chong, Ochiai https://ieeexplore.ieee.org/abstract/document/9086229/
Mutual information gradient estimation for representation learning. Wen, Zhou, He, Zhou, Xu https://arxiv.org/pdf/2005.01123.pdf
CausalVAE: structured causal disentanglement in variational autoencoder. Yang, Liu, Chen, Shen, Hao, Wang https://arxiv.org/pdf/2004.08697.pdf
VRoC: Variational autoencoder-aided multi-task rumor classifier based on text. Cheng, Nazarian, Bogdan https://dl.acm.org/doi/pdf/10.1145/3366423.3380054
On the encoder-decoder incompatibility in variational text modeling and beyond. Wu, Wang, Wang https://arxiv.org/pdf/2004.09189.pdf
Estimate the implicit likelihoods of GANs with application to anomaly detection. Ren, Li, Zhou, Li https://dl.acm.org/doi/pdf/10.1145/3366423.3380293
Emotional response generation using conditional variational autoencoder. Lee, Choi https://ieeexplore.ieee.org/abstract/document/9070547
PatchVAE: learning local latent codes for recognition. Gupta, Singh, Shrivastava https://arxiv.org/pdf/2004.03623.pdf
Generating tertiary protein structures via an interpretative variational autoencoder. Guo, Tadepalli, Zhao, Shehu https://arxiv.org/pdf/2004.07119.pdf
Attribute-based regularization of VAE latent spaces. Pati, Lerch https://arxiv.org/pdf/2004.05485.pdf
Controllable variational autoencoder. Shao, Yao, Sun, Zhang, Liu, Liu, Wang, Abdelzaher https://arxiv.org/pdf/2004.05988.pdf
Variational autoencoder-based dimensionality reduction for high-dimensional small-sample data classification. Mahmud, Huang, Fu https://www.worldscientific.com/doi/abs/10.1142/S1469026820500029
Normalizing flows with multi-scale autoregressive priors. Mahajan, Bhattacharyya, Fritz, Schiele, Roth https://arxiv.org/pdf/2004.03891.pdf
Adversarial latent autoencoders. Pidhorskyi, Adjeroh, Doretto https://arxiv.org/pdf/2004.04467.pdf
OPTIMUS: organizing sentences via pre-trained modeling of latent space. Li, Gao, Li, Li, Peng, Zhang, Gao https://arxiv.org/pdf/2004.04092.pdf
Learning discrete structured representations by adversarially maximizing mutual information. Stratos, Wiseman https://arxiv.org/pdf/2004.03991.pdf
AI giving back to statistics? Discovery of the coordinate system of univariate distributions by beta variational autoencoder. Glushkovsky https://arxiv.org/pdf/2004.02687.pdf
Towards democratizing music production with AI - design of variational autoencoder-based rhythm generator as a DAW plugin. Tokui https://arxiv.org/pdf/2004.01525.pdf
Decomposed adversarial learned inference. Li, Wang, Chen, Gao https://arxiv.org/abs/2004.10267
Fast NLP Model Pretraining with Vampire https://www.lighttag.io/blog/fast-nlp-pretraining-with-vampire/ - A blog post describing AllenAI's work on using VAEs to pre-train NLP models
A robust speaker clustering method based on discrete tied variational autoencoder. Feng, Wang, Li, Peng, Xiao https://arxiv.org/pdf/2003.01955.pdf
mmFall: Fall detection using 4D mmwave radar and variational recurrent autoencoder. Jin, Sengupta, Cao https://arxiv.org/pdf/2003.02386.pdf
Variational auto-encoders: not all failures are equal. Berger, Sebag https://arxiv.org/abs/2003.01972
Fully convolutional variational autoencoder for feature extraction of fire detection system. Nugroho, Susanty, Irawan, Koyimatu, Yunita
Time-varying item feature conditional variational autoencoder for collaborative filtering. Kim https://ieeexplore.ieee.org/abstract/document/9006014
Multi-objective variational autoencoder: an application for smart infrastructure maintenance. Anaissi, Zandavi https://arxiv.org/pdf/2003.05070.pdf
Variational autoencoder with optimizing gaussian mixture model priors. Guo, Zhou, Chen, Ying, Zhang, Zhou https://ieeexplore.ieee.org/abstract/document/9020116
Combining model predictive path integral with Kalman variational autoencoder for robot control from raw images. Kwon, Kaneko, Tsurumine, Sasaki, Motonaka, Miyoshi, Matsubara https://ieeexplore.ieee.org/abstract/document/9025842
Botnet detection using recurrent variational autoencoder. Kim, Sim, Kim, Wu https://arxiv.org/abs/2004.00234
A flow-based deep latent variable model for speech spectrogram modeling and enhancement. Nugraha, Sekiguchi, Yoshii https://ieeexplore.ieee.org/abstract/document/9028147
A variational autoencoder with deep embedding model for generalized zero-shot learning. Ma, Hu https://www.aaai.org/Papers/AAAI/2020GB/AAAI-MaP.2796.pdf
Continuous representation of molecules using graph variational autoencoder. Tavakoli, Baldi http://ceur-ws.org/Vol-2587/article_12.pdf
IntroVNMT: an introspective model for variational neural machine translation. Sheng, Xu, Guo, Liu, Zhao, Xu https://www.aaai.org/Papers/AAAI/2020GB/AAAI-ShengX.3632.pdf
Epitomic variational graph autoencoder. Khan, Kleinsteuber https://arxiv.org/pdf/2004.01468.pdf
Variance loss in variational autoencoders. Asperti https://arxiv.org/pdf/2002.09860.pdf
Dynamic narrowing of VAE bottlenecks using GECO and L0 regularization. Boom, Wauthier, Verbelen, Dhoedt https://arxiv.org/pdf/2003.10901.pdf
q-VAE for disentangled representation learning and latent dynamical systems. Kobayashi https://arxiv.org/pdf/2003.01852.pdf
Remaining useful life prediction via a variational autoencoder and a time-window-based sequence neural network. Su, Li, Wen https://onlinelibrary.wiley.com/doi/epdf/10.1002/qre.2651
A lower bound for the ELBO of the Bernoulli variational autoencoder. Sicks, Korn, Schwaar https://arxiv.org/pdf/2003.11830.pdf
VaB-AL: incorporating class imbalance and difficulty with variational Bayes for active learning. Choi, Yi, Kim, Choo, Kim, Chang, Gwon, Chang https://arxiv.org/pdf/2003.11249.pdf
Inferring personalized and race-specific causal effects of genomic aberrations on Gleason scores: a deep latent variable model. Chen, Edwards, Hicks, Zhang https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7082760/
SCALOR: generative world models with scalable object representations. Jiang, Janghorbani, Melo, Ahn https://pdfs.semanticscholar.org/02a0/1ed64f4a1bdfece5e1a83da5d9756397b0a1.pdf
Draft and Edit: Automatic Storytelling Through Multi-Pass Hierarchical Conditional Variational Autoencoder. Yu, Li, Liu, Tang, Zhang, Zhao, Yan https://www.aaai.org/Papers/AAAI/2020GB/AAAI-YuM.8133.pdf
Reverse variational autoencoder for visual attribute manipulation and anomaly detection. Gauerhof, Gu http://openaccess.thecvf.com/content_WACV_2020/papers/Lydia_Reverse_Variational_Autoencoder_for_Visual_Attribute_Manipulation_and_Anomaly_Detection_WACV_2020_paper.pdf
Bridged variational autoencoders for joint modeling of images and attributes. Yadav, Sarana, Namboodiri, Hegde http://openaccess.thecvf.com/content_WACV_2020/papers/Yadav_Bridged_Variational_Autoencoders_for_Joint_Modeling_of_Images_and_Attributes_WACV_2020_paper.pdf
Treatment effect estimation with disentangled latent factors. anon https://arxiv.org/abs/2001.10652
Unbalanced GANS: pre-training the generator of generative adversarial network using variational autoencoder. Ham, Jun, Kim https://arxiv.org/pdf/2002.02112.pdf
Regularized autoencoders via relaxed injective probability flow. Kumar, Poole, Murphy https://arxiv.org/abs/2002.08927
Out-of-distribution detection with distance guarantee in deep generative models. Zhang, Liu, Chen, Wang, Liu, Li, Wei, Chen https://arxiv.org/abs/2002.03328
Balancing reconstruction error and Kullback-Leibler divergence in variational autoencoders. Asperti, Trentin https://arxiv.org/pdf/2002.07514.pdf
Data augmentation for historical documents via cascade variational auto-encoder. Cao, Kamata https://ieeexplore.ieee.org/abstract/document/8977737
Controlling generative models with continuous factors of variations. Plumerault, Borgne, Hudelot https://arxiv.org/pdf/2001.10238.pdf
Towards a controllable disentanglement network. Song, Koyejo, Zhang https://arxiv.org/abs/2001.08572
Knowledge-induced learning with adaptive sampling variational autoencoders for open set fault diagnostics. Chao, Adey, Fink https://arxiv.org/abs/1912.12502
NestedVAE: isolating common factors via weak supervision. Vowels, Camgoz, Bowden https://arxiv.org/abs/2002.11576
Leveraging cross feedback of user and item embeddings for variational autoencoder based collaborative filtering. Jin, Zhao, Du, Liu, Gao, Li, Xu https://arxiv.org/pdf/2002.09145.pdf
K-autoencoders deep clustering. Opochinsky, Chazan, Gannot, Goldberger http://www.eng.biu.ac.il/goldbej/files/2020/02/ICASSP_2020_Yaniv.pdf
D2D-TM: a cycle VAE-GAN for multi-domain collaborative filtering. Nguyen, Ishigaki https://ieeexplore.ieee.org/abstract/document/9006461/
Disentangling controllable object through video prediction improves visual reinforcement learning. Zhong, Schwing, Peng https://arxiv.org/pdf/2002.09136.pdf
A deep adversarial variational autoencoder model for dimensionality reduction in single-cell RNA sequencing analysis. Lin, Mukherjee, Kannan https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-3401-5
Context conditional variational autoencoder for predicting multi-path trajectories in mixed traffic. Cheng, Liao, Yang, Sester, Rosenhahn https://arxiv.org/pdf/2002.05966.pdf
Optimizing variational graph autoencoder for community detection with dual optimization. Choong, Liu, Murata
Learning flat latent manifolds with VAEs. Chen, Klushyn, Ferroni, Bayer, van der Smagt https://arxiv.org/pdf/2002.04881.pdf
Learning discrete distributions by dequantization. Hoogeboom, Cohen, Tomczak https://arxiv.org/pdf/2001.11235.pdf
Learning discrete and continuous factors of data via alternating disentanglement. Jeong, Song http://proceedings.mlr.press/v97/jeong19d/jeong19d.pdf https://github.com/snu-mllab/DisentanglementICML19
Electrocardiogram generation and feature extraction using a variational autoencoder. Kuznetsov, Moskalenko, Zolotykh https://arxiv.org/pdf/2002.00254.pdf
CosmoVAE: variational autoencoder for CMB image inpainting. Yi, Guo, Fan, Hamann, Wang https://arxiv.org/pdf/2001.11651.pdf
Unsupervised representation disentanglement using cross domain features and adversarial learning in variational autoencoder based voice conversion. Huang, Luo, Hwang, Lo, Peng, Tsao, Wang https://arxiv.org/pdf/2001.07849.pdf
On implicit regularization in beta VAEs. Kumar, Poole https://arxiv.org/pdf/2002.00041.pdf
Weakly-supervised disentanglement without compromises. Locatello, Poole, Ratsch, Scholkopf, Bachem, Tschannen https://arxiv.org/pdf/2002.02886.pdf
An integrated framework based on latent variational autoencoder for providing early warning of at-risk students. Du, Yang, Hung https://ieeexplore.ieee.org/abstract/document/8952699
Variational autoencoder and friends. Zheng https://www.cs.cmu.edu/~xunzheng/files/vae_single.pdf
High-fidelity synthesis with disentangled representation. Lee, Kim, Hong, Lee https://arxiv.org/pdf/2001.04296.pdf
Neurosymbolic knowledge representation for explainable and trustworthy AI. Malo https://www.preprints.org/manuscript/202001.0163/v1
Adversarial disentanglement with grouped observations. Nemeth https://arxiv.org/pdf/2001.04761.pdf
AE-OT-GAN: Training GANs from data specific latent distribution. An, Guo, Zhang, Qi, Lei, Yau, Gu https://arxiv.org/pdf/2001.03698.pdf
AE-OT: a new generative model based on extended semi-discrete optimal transport. An, Guo, Lei, Luo, Yau, Gu https://openreview.net/pdf?id=HkldyTNYwH
Disentanglement by nonlinear ICA with general incompressible-flow networks (GIN). Sorrenson, Rother, Kothe https://arxiv.org/pdf/2001.04872.pdf
Phase transitions for the information bottleneck in representation learning. Wu, Fischer https://arxiv.org/pdf/2001.01878.pdf
Bayesian deep learning: a model-based interpretable approach. Matsubara https://www.jstage.jst.go.jp/article/nolta/11/1/11_16/_article
SPACE: unsupervised object-oriented scene representation via spatial attention and decomposition. Lin, Wu, Peri, Sun, Singh, Deng, Jiang, Ahn https://openreview.net/forum?id=rkl03ySYDH
A variational stacked autoencoder with harmony search optimizer for valve train fault diagnosis of diesel engine. Chen, Mao, Zhao, Jiang, Zhang https://www.mdpi.com/1424-8220/20/1/223
Evaluating loss compression rates of deep generative models. anon https://openreview.net/forum?id=ryga2CNKDH
Progressive learning and disentanglement of hierarchical representations. anon https://openreview.net/forum?id=SJxpsxrYPS
Learning group structure and disentangled representations of dynamical environments. Quessard, Barrett, Clements https://arxiv.org/abs/2002.06991
A simple framework for contrastive learning of visual representations. Chen, Kornblith, Norouzi, Hinton https://arxiv.org/abs/2002.05709
Out-of-distribution detection in multi-label datasets using latent space of beta VAE. Sundar, Ramakrishna, Rahiminasab, Easwaran, Dubey https://arxiv.org/pdf/2003.08740.pdf
Stochastic virtual battery modeling of uncertain electrical loads using variational autoencoder. Chakraborty, Nandanoori, Kundu, Kalsi https://arxiv.org/pdf/2003.08461.pdf
A variational autoencoder solution for road traffic forecasting systems: missing data imputation, dimension reduction, model selection and anomaly detection. Boquet, Morell, Serrano, Vicario https://www.sciencedirect.com/science/article/pii/S0968090X19309611
Detecting adversarial examples in learning-enabled cyber-physical systems using variational autoencoder for regression. Cai, Li, Koutsoukos https://arxiv.org/pdf/2003.10804.pdf
Variational autoencoders with Riemannian brownian motion priors. Kalatzis, Eklund, Arvanitidis, Hauberg https://arxiv.org/abs/2002.05227
Unsupervised representation learning in interactive environments. Racah https://papyrus.bib.umontreal.ca/xmlui/bitstream/handle/1866/23788/Racah_Evan_2019_memoire.pdf?sequence=2
Representing closed transformation paths in encoded network latent space. Connor, Rozell https://arxiv.org/pdf/1912.02644.pdf
Variational diffusion autoencoders with random walk sampling. Li, Lindenbaum, Cheng, Cloninger https://arxiv.org/abs/1905.12724
Diffusion variational autoencoders. Rey, Menkovski, Portegies https://arxiv.org/abs/1901.08991
A wrapped normal distribution on hyperbolic space for gradient-based learning. Nagano, Yamaguchi, Fujita, Koyama https://arxiv.org/abs/1902.02992
Reparameterizing distributions on Lie groups. Falorsi, Haan, Davidson, Forre https://arxiv.org/abs/1903.02958
Prescribed generative adversarial networks. Dieng, Ruiz, Blei, Titsias https://arxiv.org/abs/1910.04302
On the dimensionality of embeddings for sparse features and data. Naumov https://arxiv.org/abs/1901.02103
Deep variational autoencoders for breast cancer tissue modeling and synthesis in SFDI. Pardo, Lopez-Higuera, Pogue, Conde https://repositorio.unican.es/xmlui/bitstream/handle/10902/18323/DeepVariationalAutoencoders.pdf?sequence=1
Unsupervised anomaly detection of industrial robots using sliding-window convolution variational autoencoder. Chen, Liu, Xia, Wang, Lai https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9023488
Discriminator optimal transport. Tanaka https://arxiv.org/abs/1910.06832
Fine-tuning generative models. Khandelwal https://dspace.mit.edu/bitstream/handle/1721.1/124252/1145123030-MIT.pdf?sequence=1&isAllowed=y
Disentangling and learning robust representations with natural clustering. Antoran, Miguel https://arxiv.org/abs/1901.09415
Inherent tradeoffs in learning fair representations. Zhao, Gordon https://arxiv.org/abs/1906.08386
Affine variational autoencoders: an efficient approach for improving generalization and robustness to distribution shift. Bidart, Wong https://arxiv.org/pdf/1905.05300.pdf
Learning deep controllable and structured representations for image synthesis, structured prediction and beyond. Yan https://deepblue.lib.umich.edu/handle/2027.42/153334
Continual unsupervised representation learning. Rao, Visin, Rusu, The, Pascanu, Hadsell https://arxiv.org/pdf/1910.14481.pdf
Group-based learning of disentangled representations with generalizability for novel contents. Hosoya https://www.ijcai.org/Proceedings/2019/0348.pdf
Task-Conditioned variational autoencoders for learning movement primitives. Noseworthy, Paul, Roy, Park, Roy https://groups.csail.mit.edu/rrg/papers/noseworthy_corl_19.pdf
Multimodal generative models for compositional representation learning. Wu, Goodman https://arxiv.org/pdf/1912.05075.pdf
dpVAEs: fixing sample generation for regularized VAEs. Bhalodia, Lee, Elhabian https://arxiv.org/pdf/1911.10506.pdf
From variational to deterministic autoencoders. Ghosh, Sajjadi, Vergari, Black, Scholkopf https://arxiv.org/pdf/1903.12436.pdf
Learning representations by maximizing mutual information in variational autoencoder. Rezaabad, Vishwanath https://arxiv.org/pdf/1912.13361.pdf
Disentangled representation learning with Wasserstein total correlation. Xiao, Wang https://arxiv.org/pdf/1912.12818.pdf
Wasserstein dependency measure for representation learning. Ozair, Lynch, Bengio, van den Oord, Levine, Sermanent https://arxiv.org/pdf/1903.11780.pdf
GP-VAE: deep probabilistic time series imputation. Fortuin, Baranchuk, Ratsch, Mandt https://arxiv.org/pdf/1907.04155.pdf https://github.com/ratschlab/GP-VAE
Likelihood contribution based multi-scale architecture for generative flows. Das, Abbeel, Spanos https://arxiv.org/pdf/1908.01686.pdf
Gated Variational Autoencoders: Incorporating weak supervision to encourage disentanglement. Vowels, Camgoz, Bowden https://arxiv.org/pdf/1911.06443.pdf
An introduction to variational autoencoders. Kingma, Welling https://arxiv.org/pdf/1906.02691.pdf
Adaptive density estimation for generative models. Lucas, Shmelkov, Schmid, Alahari, Verbeek https://papers.nips.cc/paper/9370-adaptive-density-estimation-for-generative-models.pdf
Data efficient mutual information neural estimator. Lin, Sur, Nastase, Divakaran, Hasson, Amer https://arxiv.org/pdf/1905.03319.pdf
RecVAE: a new variational autoencoder for Top-N recommendations with implicit feedback. Shenbin, Alekseev, Tutubalina, Malykh, Nikolenko https://arxiv.org/pdf/1912.11160.pdf
Vibration signal generation using conditional variational autoencoder for class imbalance problem. Ko, Kim, Kong, Lee, Youn http://icmr2019.ksme.or.kr/wp/pdf/190090.pdf
The usual suspects? Reassessing blame for VAE posterior collapse. Dai, Wang, Wipf https://arxiv.org/pdf/1912.10702.pdf
What does the free energy principle tell us about the brain? Gershman https://arxiv.org/pdf/1901.07945.pdf
Sub-band vector quantized variational autoencoder for spectral envelope quantization. Srikotr, Mano https://ieeexplore.ieee.org/abstract/document/8929436
A variational-sequential graph autoencoder for neural performance prediction. Friede, Lukasik, Stuckenschmidt, Keuper https://arxiv.org/pdf/1912.05317.pdf
Explicit disentanglement of appearance and perspective in generative models. Skafte, Hauberg https://papers.nips.cc/paper/8387-explicit-disentanglement-of-appearance-and-perspective-in-generative-models.pdf
Disentangled behavioural representations. Dezfouli, Ashtiani, Ghattas, Nock, Dayan, Ong https://papers.nips.cc/paper/8497-disentangled-behavioural-representations.pdf
Learning disentangled representations for robust person re-identification. Eom, Ham https://papers.nips.cc/paper/8771-learning-disentangled-representation-for-robust-person-re-identification.pdf
Towards latent space optimality for auto-encoder based generative models. Mondal, Chowdhury, Jayendran, Singla, Asnani, AP https://arxiv.org/pdf/1912.04564.pdf
Don't blame the ELBO! A linear VAE perspective on posterior collapse. Lucas, Tucker, Grosse, Norouzi https://128.84.21.199/pdf/1911.02469.pdf
Bridging the ELBO and MMD. Ucar https://arxiv.org/pdf/1910.13181.pdf
Learning disentangled representations for counterfactual regression. Hassanpour, Greiner https://pdfs.semanticscholar.org/1df4/204e14da51b05a14781e2a4dc3e0d7da562d.pdf
Learning disentangled representations for recommendation. Ma, Zhou, Cui, Yang, Zhu https://arxiv.org/pdf/1910.14238.pdf
A vector quantized variational autoencoder (VQ-VAE) autoregressive neural F0 model for statistical parametric speech synthesis. Wang, Takaki, Yamagishi, King, Tokuda https://ieeexplore.ieee.org/abstract/document/8884734
Diversity-aware event prediction based on a conditional variational autoencoder with reconstruction. Kiyomaru, Omura, Murawaki, Kawahara, Kurohashi https://www.aclweb.org/anthology/D19-6014.pdf
Learning multimodal representations with factorized deep generative models. Tsai, Liang, Zadeh, Morency, Salakhutdinov https://pdfs.semanticscholar.org/7416/6384ad391513e8e8bf48cbeaff2516b8c332.pdf
High-dimensional nonlinear profile monitoring based on deep probabilistic autoencoders. Sergin, Yan https://arxiv.org/pdf/1911.00482.pdf
Leveraging directed causal discovery to detect latent common causes. Lee, Hart, Richens, Johri https://arxiv.org/pdf/1910.10174.pdf
Robust discrimination and generation of faces using compact, disentangled embeddings. Browatzki, Wallraven http://openaccess.thecvf.com/content_ICCVW_2019/papers/RSL-CV/Browatzki_Robust_Discrimination_and_Generation_of_Faces_using_Compact_Disentangled_Embeddings_ICCVW_2019_paper.pdf
Coulomb Autoencoders. Sansone, Ali, Sun https://arxiv.org/pdf/1802.03505.pdf
Contrastive learning of structured world models. Kipf, Pol, Welling https://arxiv.org/pdf/1911.12247.pdf
No representation without transformation. Giannone, Masci, Osendorfer https://pgr-workshop.github.io/img/PGR007.pdf
Neural density estimation. Papamakarios https://arxiv.org/pdf/1910.13233.pdf
Variational autoencoder-based approach for rail defect identification. Wei, Ni http://www.dpi-proceedings.com/index.php/shm2019/article/view/32432
Variational learning with disentanglement-pytorch. Abdi, Abolmaesumi, Fels https://openreview.net/pdf?id=rJgUsFYnir
PVAE: learning disentangled representations with intrinsic dimension via approximated L0 regularization. Shi, Glocker, Castro https://openreview.net/pdf?id=HJg8stY2oB
Mixed-curvature variational autoencoders. Skopek, Ganea, Becigneul https://arxiv.org/pdf/1911.08411.pdf
Continuous hierarchical representations with poincare variational autoencoders. Mathieu, Le Lan, Maddison, Tomioka https://arxiv.org/pdf/1901.06033.pdf
VIREL: A variational inference framework for reinforcement learning. Fellows, Mahajan, Rudner, Whiteson https://arxiv.org/pdf/1811.01132.pdf
Disentangling video with independent prediction. Whitney, Fergus https://arxiv.org/pdf/1901.05590.pdf
Disentangling state space representations. Miladinovic, Gondal, Scholkopf, Buhmann, Bauer https://arxiv.org/pdf/1906.03255.pdf
AlignFlow: cycle consistent learning from multiple domains via normalizing flows. Grover, Chute, Shu, Cao, Ermon https://arxiv.org/pdf/1905.12892.pdf
IB-GAN: disentangled representation learning with information bottleneck GAN. Jeon, Lee, Kim https://openreview.net/forum?id=ryljV2A5KX
Learning hierarchical priors in VAEs. Klushyn, Chen, Kurle, Cseke, van der Smagt https://papers.nips.cc/paper/8553-learning-hierarchical-priors-in-vaes.pdf
ODE2VAE: Deep generative second order ODEs with Bayesian neural networks. Yildiz, Heinonen, Lahdesmaki https://papers.nips.cc/paper/9497-ode2vae-deep-generative-second-order-odes-with-bayesian-neural-networks.pdf
Explicitly disentangling image content from translation and rotation with spatial-VAE. Bepler, Zhong, Kelley, Brignole, Berger https://papers.nips.cc/paper/9677-explicitly-disentangling-image-content-from-translation-and-rotation-with-spatial-vae.pdf
A primal-dual link between GANs and autoencoders. Husain, Nock, Williamson https://papers.nips.cc/paper/8333-a-primal-dual-link-between-gans-and-autoencoders.pdf
Exact rate-distortion in autoencoders via echo noise. Brekelmans, Moyer, Galstyan, ver Steeg https://papers.nips.cc/paper/8644-exact-rate-distortion-in-autoencoders-via-echo-noise.pdf
Direct optimization through arg max for discrete variational auto-encoder. Lorberbom, Jaakkola, Gane, Hazan https://papers.nips.cc/paper/8851-direct-optimization-through-arg-max-for-discrete-variational-auto-encoder.pdf
Semi-implicit graph variational auto-encoders. Hasanzadeh, Hajiramezanali, Narayanan, Duffield, Zhou, Qian https://papers.nips.cc/paper/9255-semi-implicit-graph-variational-auto-encoders.pdf
The continuous Bernoulli: fixing a pervasive error in variational autoencoders. Loaiza-Ganem, Cunningham https://papers.nips.cc/paper/9484-the-continuous-bernoulli-fixing-a-pervasive-error-in-variational-autoencoders.pdf
Provable gradient variance guarantees for black-box variational inference. Domke https://papers.nips.cc/paper/8325-provable-gradient-variance-guarantees-for-black-box-variational-inference.pdf
Conditional structure generation through graph variational generative adversarial nets. Yang, Zhuang, Shi, Luu, Li https://papers.nips.cc/paper/8415-conditional-structure-generation-through-graph-variational-generative-adversarial-nets.pdf
Scalable spike source localization in extracellular recordings using amortized variational inference. Hurwitz, Xu, Srivastava, Buccino, Hennig https://papers.nips.cc/paper/8720-scalable-spike-source-localization-in-extracellular-recordings-using-amortized-variational-inference.pdf
A latent variational framework for stochastic optimization. Casgrain https://papers.nips.cc/paper/8802-a-latent-variational-framework-for-stochastic-optimization.pdf
MAVEN: multi-agent variational exploration. Mahajan, Rashid, Samvelyan, Whiteson https://papers.nips.cc/paper/8978-maven-multi-agent-variational-exploration.pdf
Variational graph recurrent neural networks. Hajiramezanali, Hasanzadeh, Narayanan, Duffield, Zhou, Qian https://papers.nips.cc/paper/9254-variational-graph-recurrent-neural-networks.pdf
The thermodynamic variational objective. Masrani, Le, Wood https://papers.nips.cc/paper/9328-the-thermodynamic-variational-objective.pdf
Variational temporal abstraction. Kim, Ahn, Bengio https://papers.nips.cc/paper/9332-variational-temporal-abstraction.pdf
Exploiting video sequences for unsupervised disentangling in generative adversarial networks. Tuesca, Uzal https://arxiv.org/pdf/1910.11104.pdf
Couple-VAE: mitigating the encoder-decoder incompatibility in variational text modeling with coupled deterministic networks. https://openreview.net/pdf?id=SJlo_TVKwS
Variational mixture-of-experts autoencoders for multi-modal deep generative models. Shi, Siddharth, Paige, Torr https://papers.nips.cc/paper/9702-variational-mixture-of-experts-autoencoders-for-multi-modal-deep-generative-models.pdf
Invertible convolutional flow. Karami, Schuurmans, Sohl-Dickstein, Dinh, Duckworth https://papers.nips.cc/paper/8801-invertible-convolutional-flow.pdf
Implicit posterior variational inference for deep Gaussian processes. Yu, Chen, Dai, Low, Jaillet https://papers.nips.cc/paper/9593-implicit-posterior-variational-inference-for-deep-gaussian-processes.pdf
MaCow: Masked convolutional generative flow. Ma, Kong, Zhang, Hovy https://papers.nips.cc/paper/8824-macow-masked-convolutional-generative-flow.pdf
Residual flows for invertible generative modeling. Chen, Behrmann, Duvenaud, Jacobsen https://papers.nips.cc/paper/9183-residual-flows-for-invertible-generative-modeling.pdf
Discrete flows: invertible generative models of discrete data. Tran, Vafa, Agrawal, Dinh, Poole https://papers.nips.cc/paper/9612-discrete-flows-invertible-generative-models-of-discrete-data.pdf
Re-examination of the role of latent variables in sequence modeling. Lai, Dai, Yang, Yoo https://papers.nips.cc/paper/8996-re-examination-of-the-role-of-latent-variables-in-sequence-modeling.pdf
Learning-in-the-loop optimization: end-to-end control and co-design of soft robots through learned deep latent representations. Spielbergs, Zhao, Hu, Du, Matusik, Rus https://papers.nips.cc/paper/9038-learning-in-the-loop-optimization-end-to-end-control-and-co-design-of-soft-robots-through-learned-deep-latent-representations.pdf
Triad constraints for learning causal structure of latent variables. Cai, Xie, Glymour, Hao, Zhang https://papers.nips.cc/paper/9448-triad-constraints-for-learning-causal-structure-of-latent-variables.pdf
Disentangling influence: using disentangled representations to audit model predictions. Marx, Phillips, Friedler, Scheidegger, Venkatasubramanian https://papers.nips.cc/paper/8699-disentangling-influence-using-disentangled-representations-to-audit-model-predictions.pdf
Symmetry-based disentangled representation learning requires interaction with environments. Caselles-Dupre, Ortiz, Filliat https://papers.nips.cc/paper/8709-symmetry-based-disentangled-representation-learning-requires-interaction-with-environments.pdf
Weakly supervised disentanglement with guarantees. Shu, Chen, Kumar, Ermon, Poole https://arxiv.org/pdf/1910.09772.pdf
Demystifying inter-class disentanglement. Gabbay, Hoshen https://arxiv.org/pdf/1906.11796.pdf
Spectral regularization for combating mode collapse in GANs. Liu, Tang, Xie, Qiu https://arxiv.org/pdf/1908.10999.pdf
Geometric disentanglement for generative latent shape models. Aumentado-Armstrong, Tsogkas, Jepson, Dickinson https://arxiv.org/pdf/1908.06386.pdf
Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation. Li, Lin, Lin, Wang https://arxiv.org/pdf/1909.09675.pdf
Identity from here, pose from there: self-supervised disentanglement and generation of objects using unlabeled videos. Xiao, Liu, Lee https://web.cs.ucdavis.edu/~yjlee/projects/iccv2019_disentangle.pdf
Content and style disentanglement for artistic style transfer. Kotovenko, Sanakoyeu, Lang, Ommer https://compvis.github.io/content-style-disentangled-ST/paper.pdf
Unsupervised robust disentangling of latent characteristics for image synthesis. Esser, Haux, Ommer https://arxiv.org/pdf/1910.10223.pdf
LADN: local adversarial disentangling network for facial makeup and de-makeup. Gu, Wang, Chiu, Tai, Tang https://arxiv.org/pdf/1904.11272.pdf
Video compression with rate-distortion autoencoders. Habibian, van Rozendaal, Tomczak, Cohen https://arxiv.org/pdf/1908.05717.pdf
Variable rate deep image compression with a conditional autoencoder. Choi, El-Khamy, Lee https://arxiv.org/pdf/1909.04802.pdf
Memorizing normality to detect anomaly: memory-augmented deep autoencoder for unsupervised anomaly detection. Gong, Liu, Le, Saha https://arxiv.org/pdf/1904.02639.pdf
AVT: unsupervised learning of transformation equivariant representations by autoencoding variational transformations. Qi, Zhang, Chen, Tian https://arxiv.org/pdf/1903.10863.pdf
Deep clustering by Gaussian mixture variational autoencoders with graph embedding. Yang, Cheung, Li, Fang http://openaccess.thecvf.com/content_ICCV_2019/papers/Yang_Deep_Clustering_by_Gaussian_Mixture_Variational_Autoencoders_With_Graph_Embedding_ICCV_2019_paper.pdf
Variational adversarial active learning. Sinha, Ebrahimi, Darrell https://arxiv.org/pdf/1904.00370.pdf
Variational few-shot learning. Zhang, Zhao, Ni, Xu, Yang http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Variational_Few-Shot_Learning_ICCV_2019_paper.pdf
Multi-angle point cloud-VAE: unsupervised feature learning for 3D point clouds from multiple angles by joint self-reconstruction and half-to-half prediction. Han, Wang, Liu, Zwicker https://arxiv.org/pdf/1907.12704.pdf
LayoutVAE: stochastic scene layout generation from a label set. Jyothi, Durand, He, Sigal, Mori https://arxiv.org/pdf/1907.10719.pdf
VV-NET: Voxel VAE Net with group convolutions for point cloud segmentation. Meng, Gao, Lai, Manocha https://arxiv.org/pdf/1811.04337.pdf
Bayes-Factor-VAE: hierarchical bayesian deep auto-encoder models for factor disentanglement. Kim, Wang, Sahu, Pavlovic https://arxiv.org/pdf/1909.02820.pdf
Robust ordinal VAE: Employing noisy pairwise comparisons for disentanglement. Chen, Batmanghelich https://arxiv.org/pdf/1910.05898.pdf
Evaluating disentangled representations. Sepliarskaia, Kiseleva, de Rijke https://arxiv.org/pdf/1910.05587.pdf
A stable variational autoencoder for text modelling. Li, Li, Lin, Collinson, Mao https://abdn.pure.elsevier.com/en/publications/a-stable-variational-autoencoder-for-text-modelling
Hamiltonian generative networks. Toth, Rezende, Jaegle, Racaniere, Botev, Higgins https://128.84.21.199/pdf/1909.13789.pdf
LAVAE: Disentangling location and appearance. Dittadi, Winther https://arxiv.org/pdf/1909.11813.pdf
Interpretable models in probabilistic machine learning. Kim https://ora.ox.ac.uk/objects/uuid:b238ed7d-7155-4860-960e-6227c7d688fb/download_file?file_format=pdf&safe_filename=PhD_Thesis_of_University_of_Oxford.pdf&type_of_work=Thesis
Disentangling speech and non-speech components for building robust acoustic models from found data. Gurunath, Rallabandi, Black https://arxiv.org/pdf/1909.11727.pdf
Joint separation, dereverberation and classification of multiple sources using multichannel variational autoencoder with auxiliary classifier. Inoue, Kameoka, Li, Makino http://pub.dega-akustik.de/ICA2019/data/articles/000906.pdf
SuperVAE: Superpixelwise variational autoencoder for salient object detection. Li, Sun, Guo https://www.aaai.org/ojs/index.php/AAAI/article/view/4876
Implicit discriminator in variational autoencoder. Munjal, Paul, Krishnan https://arxiv.org/pdf/1909.13062.pdf
TransGaGa: Geometry-aware unsupervised image-to-image translation. Wu, Cao, Li, Qian, Loy http://openaccess.thecvf.com/content_CVPR_2019/papers/Wu_TransGaGa_Geometry-Aware_Unsupervised_Image-To-Image_Translation_CVPR_2019_paper.pdf
Variational attention using articulatory priors for generating code mixed speech using monolingual corpora. Rallabandi, Black. https://www.isca-speech.org/archive/Interspeech_2019/pdfs/1103.pdf
One-class collaborative filtering with the queryable variational autoencoder. Wu, Bouadjenek, Sanner. https://people.eng.unimelb.edu.au/mbouadjenek/papers/SIGIR_Short_2019.pdf
Predictive auxiliary variational autoencoder for representation learning of global speech characteristics. Springenberg, Lakomkin, Weber, Wermter. https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2845.pdf
Data augmentation using variational autoencoder for embedding based speaker verification. Wu, Wang, Qian, Yu https://zhanghaowu.me/assets/VAE_Data_Augmentation_proceeding.pdf
One-shot voice conversion with disentangled representations by leveraging phonetic posteriorgrams. Mohammadi, Kim. https://www.isca-speech.org/archive/Interspeech_2019/pdfs/1798.pdf
EEG-based adaptive driver-vehicle interface using variational autoencoder and PI-TSVM. Bi, Zhang, Lian https://www.researchgate.net/profile/Luzheng_Bi2/publication/335619300_EEG-Based_Adaptive_Driver-Vehicle_Interface_Using_Variational_Autoencoder_and_PI-TSVM/links/5d70bb234585151ee49e5a30/EEG-Based-Adaptive-Driver-Vehicle-Interface-Using-Variational-Autoencoder-and-PI-TSVM.pdf
Neural Gaussian copula for variational autoencoder. Wang, Wang https://arxiv.org/pdf/1909.03569.pdf
Enhancing VAEs for collaborative filtering: Flexible priors and gating mechanisms. Kim, Suh http://delivery.acm.org/10.1145/3350000/3347015/p403-kim.pdf?ip=86.162.136.199&id=3347015&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&__acm__=1568726810_89cfa7cbc7c1b0663405d4446f9fce85
Riemannian normalizing flow on variational wasserstein autoencoder for text modeling. Wang, Wang https://arxiv.org/pdf/1904.02399.pdf
Disentanglement with hyperspherical latent spaces using diffusion variational autoencoders. Rey https://openreview.net/pdf?id=SylFDSU6Sr
Learning deep representations by mutual information estimation and maximization. Hjelm, Fedorov, Lavoie-Marchildon, Grewal, Bachman, Trischler, Bengio https://arxiv.org/pdf/1808.06670.pdf https://github.com/rdevon/DIM
Novel tracking approach based on fully-unsupervised disentanglement of the geometrical factors of variation. Vladymyrov, Ariga https://arxiv.org/pdf/1909.04427.pdf
Real time trajectory prediction using conditional generative models. Gomez-Gonzalez, Prokudin, Scholkopf, Peters https://arxiv.org/pdf/1909.03895.pdf
Disentanglement challenge: from regularization to reconstruction. Qiao, Li, Cai https://openreview.net/pdf?id=ByecPrUaHH
Improved disentanglement through aggregated convolutional feature maps. Seitzer https://openreview.net/pdf?id=ryxOvH86SH
Linked variational autoencoders for inferring substitutable and supplementary items. Rakesh, Wang, Shu http://www.public.asu.edu/~skai2/files/wsdm_2019_lvae.pdf
On the fairness of disentangled representations. Locatello, Abbati, Rainforth, Bauer, Scholkopf, Bachem https://arxiv.org/pdf/1905.13662.pdf
Learning robust representations by projecting superficial statistics out. Wang, He, Lipton, Xing https://openreview.net/pdf?id=rJEjjoR9K7
Understanding posterior collapse in generative latent variable models. Lucas, Tucker, Grosse, Norouzi https://openreview.net/pdf?id=r1xaVLUYuE
On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. Gondal, Wuthrich, Miladinovic, Locatello, Breidt, Volchkov, Akpo, Bachem, Scholkopf, Bauer https://arxiv.org/pdf/1906.03292.pdf https://github.com/rr-learning/disentanglement_dataset
DIVA: domain invariant variational autoencoder. Ilse, Tomczak, Louizos, Welling https://arxiv.org/pdf/1905.10427.pdf https://github.com/AMLab-Amsterdam/DIVA
Comment: Variational Autoencoders as empirical Bayes. Wang, Miller, Blei http://www.stat.columbia.edu/~yixinwang/papers/WangMillerBlei2019.pdf
Fast MVAE: joint separation and classification of mixed sources based on multichannel variational autoencoder with auxiliary classifier. Li, Kameoka, Makino https://ieeexplore.ieee.org/abstract/document/8682623
Reweighted expectation maximization. Dieng, Paisley https://arxiv.org/pdf/1906.05850.pdf https://github.com/adjidieng/REM
Semisupervised text classification by variational autoencoder. Xu, Tan https://ieeexplore.ieee.org/abstract/document/8672806
Learning deep latent-variable MRFs with amortized Bethe free-energy minimization. Wiseman https://openreview.net/pdf?id=ByeMHULt_N
Contrastive variational autoencoder enhances salient features. Abid, Zou https://arxiv.org/pdf/1902.04601.pdf https://github.com/abidlabs/contrastive_vae
Learning latent superstructures in variational autoencoders for deep multidimensional clustering. Li, Chen, Poon, Zhang https://openreview.net/pdf?id=SJgNwi09Km
Tighter variational bounds are not necessarily better. Rainforth, Kosiorek, Le, Maddison, Igl, Wood, Teh https://arxiv.org/pdf/1802.04537.pdf https://github.com/lxuechen/DReG-PyTorch
ISA-VAE: Independent subspace analysis with variational autoencoders. Anon. https://openreview.net/pdf?id=rJl_NhR9K7
Manifold mixup: better representations by interpolating hidden states. Verma, Lamb, Beckham, Najafi, Mitliagkas, Courville, Lopez-Paz, Bengio. https://arxiv.org/pdf/1806.05236.pdf https://github.com/vikasverma1077/manifold_mixup
Bit-swap: recursive bits-back coding for lossless compression with hierarchical latent variables. Kingma, Abbeel, Ho. http://proceedings.mlr.press/v97/kingma19a/kingma19a.pdf https://github.com/fhkingma/bitswap
Practical lossless compression with latent variables using bits back coding. Townsend, Bird, Barber. https://arxiv.org/pdf/1901.04866.pdf https://github.com/bits-back/bits-back
BIVA: a very deep hierarchy of latent variables for generative modeling. Maaloe, Fraccaro, Lievin, Winther. https://arxiv.org/pdf/1902.02102.pdf
Flow++: improving flow-based generative models with variational dequantization and architecture design. Ho, Chen, Srinivas, Duan, Abbeel. https://arxiv.org/pdf/1902.00275.pdf https://github.com/aravindsrinivas/flowpp
Sylvester normalizing flows for variational inference. van den Berg, Hasenclever, Tomczak, Welling. https://arxiv.org/pdf/1803.05649.pdf https://github.com/riannevdberg/sylvester-flows
Unbiased implicit variational inference. Titsias, Ruiz. https://arxiv.org/pdf/1808.02078.pdf
Robustly disentangled causal mechanisms: validating deep representations for interventional robustness. Suter, Miladinovic, Scholkopf, Bauer. https://arxiv.org/pdf/1811.00007.pdf
Tutorial: Deriving the standard variational autoencoder (VAE) loss function. Odaibo https://arxiv.org/pdf/1907.08956.pdf
Learning disentangled representations with reference-based variational autoencoders. Ruiz, Martinez, Binefa, Verbeek. https://arxiv.org/pdf/1901.08534
Disentangling factors of variation using few labels. Locatello, Tschannen, Bauer, Ratsch, Scholkopf, Bachem https://arxiv.org/pdf/1905.01258.pdf
Disentangling disentanglement in variational autoencoders. Mathieu, Rainforth, Siddharth, Teh https://arxiv.org/pdf/1812.02833.pdf https://github.com/iffsid/disentangling-disentanglement
LIA: latently invertible autoencoder with adversarial learning. Zhu, Zhao, Zhang https://arxiv.org/pdf/1906.08090.pdf
Emerging disentanglement in auto-encoder based unsupervised image content transfer. Press, Galanti, Benaim, Wolf https://openreview.net/pdf?id=BylE1205Fm https://github.com/oripress/ContentDisentanglement
MAE: Mutual posterior-divergence regularization for variational autoencoders. Ma, Zhou, Hovy https://arxiv.org/pdf/1901.01498.pdf https://github.com/XuezheMax/mae
Overcoming the disentanglement vs reconstruction trade-off via Jacobian supervision. Lezama https://openreview.net/pdf?id=Hkg4W2AcFm https://github.com/jlezama/disentangling-jacobian https://github.com/jlezama/disentangling-jacobian/tree/master/unsupervised_disentangling
Challenging common assumptions in the unsupervised learning of disentangled representations. Locatello, Bauer, Lucic, Ratsch, Gelly, Scholkopf, Bachem https://arxiv.org/abs/1811.12359 https://github.com/google-research/disentanglement_lib/blob/master/README.md
Variational prototyping encoder: one shot learning with prototypical images. Kim, Oh, Lee, Pan, Kweon http://openaccess.thecvf.com/content_CVPR_2019/papers/Kim_Variational_Prototyping-Encoder_One-Shot_Learning_With_Prototypical_Images_CVPR_2019_paper.pdf
Diagnosing and enhancing VAE models (conference and journal versions available). Dai, Wipf https://arxiv.org/pdf/1903.05789.pdf https://github.com/daib13/TwoStageVAE
Disentangling latent hands for image synthesis and pose estimation. Yang, Yao http://openaccess.thecvf.com/content_CVPR_2019/papers/Yang_Disentangling_Latent_Hands_for_Image_Synthesis_and_Pose_Estimation_CVPR_2019_paper.pdf
Rare event detection using disentangled representation learning. Hamaguchi, Sakurada, Nakamura http://openaccess.thecvf.com/content_CVPR_2019/papers/Hamaguchi_Rare_Event_Detection_Using_Disentangled_Representation_Learning_CVPR_2019_paper.pdf
Disentangling latent space for VAE by label relevant/irrelevant dimensions. Zheng, Sun https://arxiv.org/pdf/1812.09502.pdf https://github.com/ZhilZheng/Lr-LiVAE
Variational autoencoders pursue PCA directions (by accident). Rolinek, Zietlow, Martius https://arxiv.org/pdf/1812.06775.pdf
Disentangled Representation learning for 3D face shape. Jiang, Wu, Chen, Zhang https://arxiv.org/pdf/1902.09887.pdf https://github.com/zihangJiang/DR-Learning-for-3D-Face
Preventing posterior collapse with delta-VAEs. Razavi, van den Oord, Poole, Vinyals https://arxiv.org/pdf/1901.03416.pdf https://github.com/mattjj/svae
Gait recognition via disentangled representation learning. Zhang, Tran, Yin, Atoum, Liu, Wan, Wang https://arxiv.org/pdf/1904.04925.pdf
Hierarchical disentanglement of discriminative latent features for zero-shot learning. Tong, Wang, Klinkigt, Kobayashi, Nonaka http://openaccess.thecvf.com/content_CVPR_2019/papers/Tong_Hierarchical_Disentanglement_of_Discriminative_Latent_Features_for_Zero-Shot_Learning_CVPR_2019_paper.pdf
Generalized zero- and few-shot learning via aligned variational autoencoders. Schonfeld, Ebrahimi, Sinha, Darrell, Akata https://arxiv.org/pdf/1812.01784.pdf https://github.com/chichilicious/Generalized-Zero-Shot-Learning-via-Aligned-Variational-Autoencoders
Unsupervised part-based disentangling of object shape and appearance. Lorenz, Bereska, Milbich, Ommer https://arxiv.org/pdf/1903.06946.pdf
A semi-supervised deep generative model for human body analysis. de Bem, Ghosh, Ajanthan, Miksik, Siddharth, Torr http://www.robots.ox.ac.uk/~tvg/publications/2018/W21P20.pdf
Multi-object representation learning with iterative variational inference. Greff, Kaufman, Kabra, Watters, Burgess, Zoran, Matthey, Botvinick, Lerchner https://arxiv.org/pdf/1903.00450.pdf https://github.com/MichaelKevinKelly/IODINE
Generating diverse high-fidelity images with VQ-VAE-2. Razavi, van den Oord, Vinyals https://arxiv.org/pdf/1906.00446.pdf https://github.com/deepmind/sonnet/blob/master/sonnet/examples/vqvae_example.ipynb https://github.com/rosinality/vq-vae-2-pytorch
MONet: unsupervised scene decomposition and representation. Burgess, Matthey, Watters, Kabra, Higgins, Botvinick, Lerchner https://arxiv.org/pdf/1901.11390.pdf
Structured disentangled representations (earlier title: Hierarchical disentangled representations). Esmaeili, Wu, Jain, Bozkurt, Siddharth, Paige, Brooks, Dy, van de Meent https://arxiv.org/pdf/1804.02086.pdf
Spatial Broadcast Decoder: A simple architecture for learning disentangled representations in VAEs. Watters, Matthey, Burgess, Lerchner https://arxiv.org/pdf/1901.07017.pdf https://github.com/lukaszbinden/spatial-broadcast-decoder
Resampled priors for variational autoencoders. Bauer, Mnih https://arxiv.org/pdf/1810.11428.pdf
Weakly supervised disentanglement by pairwise similarities. Chen, Batmanghelich https://arxiv.org/pdf/1906.01044.pdf
Deep variational information bottleneck. Alemi, Fischer, Dillon, Murphy https://arxiv.org/pdf/1612.00410.pdf https://github.com/alexalemi/vib_demo
Generalized variational inference. Knoblauch, Jewson, Damoulas https://arxiv.org/pdf/1904.02063.pdf
Variational autoencoders and nonlinear ICA: a unifying framework. Khemakhem, Kingma https://arxiv.org/pdf/1907.04809.pdf
Lagging inference networks and posterior collapse in variational autoencoders. He, Spokoyny, Neubig, Berg-Kirkpatrick https://arxiv.org/pdf/1901.05534.pdf https://github.com/jxhe/vae-lagging-encoder
Avoiding latent variable collapse with generative skip models. Dieng, Kim, Rush, Blei https://arxiv.org/pdf/1807.04863.pdf
Distribution matching in variational inference. Rosca, Lakshminarayanan, Mohamed https://arxiv.org/pdf/1802.06847.pdf
A variational auto-encoder model for stochastic point processes. Mehrasa, Jyothi, Durand, He, Sigal, Mori https://arxiv.org/pdf/1904.03273.pdf
Sliced-Wasserstein auto-encoders. Kolouri, Pope, Martin, Rohde https://openreview.net/pdf?id=H1xaJn05FQ https://github.com/skolouri/swae
A deep generative model for graph layout. Kwon, Ma https://arxiv.org/pdf/1904.12225.pdf
Differentiable perturb-and-parse semi-supervised parsing with a structured variational autoencoder. Corro, Titov https://openreview.net/pdf?id=BJlgNh0qKQ https://github.com/FilippoC/diffdp
Variational autoencoders with jointly optimized latent dependency structure. He, Gong, Marino, Mori, Lehrmann https://openreview.net/pdf?id=SJgsCjCqt7 https://github.com/ys1998/vae-latent-structure
Unsupervised learning of spatiotemporally coherent metrics. Goroshin, Bruna, Tompson, Eigen, LeCun https://arxiv.org/pdf/1412.6056.pdf
Temporal difference variational auto-encoder. Gregor, Papamakarios, Besse, Buesing, Weber https://arxiv.org/pdf/1806.03107.pdf https://github.com/xqding/TD-VAE
Representation learning with contrastive predictive coding. van den Oord, Li, Vinyals https://arxiv.org/pdf/1807.03748.pdf https://github.com/davidtellez/contrastive-predictive-coding
Representation disentanglement for multi-task learning with application to fetal ultrasound. Meng, Pawlowski, Rueckert, Kainz https://arxiv.org/pdf/1908.07885.pdf
M2VAE - derivation of a multi-modal variational autoencoder objective from the marginal joint log-likelihood. Korthals https://arxiv.org/pdf/1903.07303.pdf
Predicting visual memory schemas with variational autoencoders. Kyle-Davidson, Bors, Evans https://arxiv.org/pdf/1907.08514.pdf
T-CVAE: Transformer-based conditioned variational autoencoder for story completion. Wang, Wan https://www.ijcai.org/proceedings/2019/0727.pdf https://github.com/sodawater/T-CVAE
PuVAE: A variational autoencoder to purify adversarial examples. Hwang, Park, Jang, Yoon, Cho https://arxiv.org/pdf/1903.00585.pdf
Coupled VAE: Improved accuracy and robustness of a variational autoencoder. Cao, Li, Nelson https://arxiv.org/pdf/1906.00536.pdf
D-VAE: A variational autoencoder for directed acyclic graphs. Zhang, Jiang, Cui, Garnett, Chen https://arxiv.org/abs/1904.11088 https://github.com/muhanzhang/D-VAE
Are disentangled representations helpful for abstract reasoning? van Steenkiste, Locatello, Schmidhuber, Bachem https://arxiv.org/pdf/1905.12506.pdf
A heuristic for unsupervised model selection for variational disentangled representation learning. Duan, Watters, Matthey, Burgess, Lerchner, Higgins https://arxiv.org/pdf/1905.12614.pdf
Dual space learning with variational autoencoders. Okamoto, Suzuki, Higuchi, Ohsawa, Matsuo https://pdfs.semanticscholar.org/ea70/6495d4a6214b3d6174bb7fd99c5a9c34c2e6.pdf
Variational autoencoders for sparse and overdispersed discrete data. Zhao, Rai, Du, Buntine https://arxiv.org/pdf/1905.00616.pdf
Variational auto-decoder. Zadeh, Lim, Liang, Morency. https://arxiv.org/pdf/1903.00840.pdf
Causal discovery with attention-based convolutional neural networks. Nauta, Bucur, Seifert https://www.mdpi.com/2504-4990/1/1/19/pdf
Variational laplace autoencoders. Park, Kim, Kim http://proceedings.mlr.press/v97/park19a/park19a.pdf
Variational autoencoders with normalizing flow decoders. https://openreview.net/forum?id=r1eh30NFwB
Gaussian process priors for view-aware inference. Hou, Heljakka, Solin https://arxiv.org/pdf/1912.03249.pdf
SGVAE: sequential graph variational autoencoder. Jing, Chi, Tang https://arxiv.org/pdf/1912.07800.pdf
Improving multimodal generative models with disentangled latent partitions. Daunhawer, Sutter, Vogt http://bayesiandeeplearning.org/2019/papers/103.pdf
Cross-population variational autoencoders. Davison, Severson, Ghosh https://openreview.net/pdf?id=r1eWdlBFwS http://bayesiandeeplearning.org/2019/papers/96.pdf
Evidential disambiguation of latent multimodality in conditional variational autoencoders. Itkina, Ivanovic, Senanayake, Kochenderfer, Pavone http://bayesiandeeplearning.org/2019/papers/34.pdf
Increasing the generalisation capacity of conditional VAEs. Klushyn, Chen, Cseke, Bayer, van der Smagt https://link.springer.com/chapter/10.1007/978-3-030-30484-3_61
Multi-source neural variational inference. Kurle, Gunnemann, van der Smagt https://www.aaai.org/ojs/index.php/AAAI/article/view/4311
Early integration for movement modeling in latent spaces. Hornung, Chen, van der Smagt https://books.google.co.uk/books?hl=en&lr=&id=M1WfDwAAQBAJ&oi=fnd&pg=PA305&dq=info:MRhvAh4qD7wJ:scholar.google.com&ots=hN84xN5saO&sig=TBMgkFo6z9wrL64TcvzjU4G5gCQ&redir_esc=y#v=onepage&q&f=false
Building face recognition system with triplet-based stacked variational denoising autoencoder. Lee, Hart, Richens, Johri https://dl.acm.org/citation.cfm?id=3369707
Cross-domain variational autoencoder for recommender systems. Shi, Wang https://ieeexplore.ieee.org/abstract/document/8935901
Predictive coding, variational autoencoders, and biological connections. Marino https://openreview.net/pdf?id=SyeumQYUUH
A general and adaptive robust loss function. Barron https://arxiv.org/pdf/1701.03077.pdf
Variational autoencoder trajectory primitives and discrete latent. Osa, Ikemoto https://arxiv.org/pdf/1912.04063.pdf
Faster attend-infer-repeat with tractable probabilistic models. Stelzner, Peharz, Kersting http://proceedings.mlr.press/v97/stelzner19a/stelzner19a.pdf https://github.com/stelzner/supair
Learning predictive models from observation and interaction. Schmeckpeper, Xie, Rybkin, Tian, Daniilidis, Levine, Finn https://arxiv.org/pdf/1912.12773.pdf
Translating visual art into music. Muller-Eberstein, van Noord http://openaccess.thecvf.com/content_ICCVW_2019/papers/CVFAD/Muller-Eberstein_Translating_Visual_Art_Into_Music_ICCVW_2019_paper.pdf
Non-parallel voice conversion with controllable speaker individuality using variational autoencoder. Ho, Akagi http://www.apsipa.org/proceedings/2019/pdfs/68.pdf
Derivation of the variational Bayes equations. Maren https://arxiv.org/pdf/1906.08804.pdf
TVAE: Triplet-Based Variational Autoencoder using Metric Learning. Ishfaq, Hoogi, Rubin. https://arxiv.org/abs/1802.04403
Conditional neural processes. Garnelo, Rosenbaum, Maddison, Ramalho, Saxton, Shanahan, Teh, Rezende, Eslami https://arxiv.org/abs/1807.01613
The variational homoencoder: learning to learn high capacity generative models from few examples. Hewitt, Nye, Gane, Jaakkola, Tenenbaum https://arxiv.org/abs/1807.08919
Wasserstein variational inference. Ambrogioni, Guclu, Gucluturk, Hinne, Maris, van Gerven https://arxiv.org/abs/1805.11284
The dreaming variational autoencoder for reinforcement learning environments. Andersen, Goodwin, Granmo https://arxiv.org/pdf/1810.01112v1.pdf
DVAE++: Discrete variational autoencoders with overlapping transformations. Vahdat, Macready, Bian, Khoshaman, Andriyash http://proceedings.mlr.press/v80/vahdat18a/vahdat18a.pdf
FFJORD: free-form continuous dynamics for scalable reversible generative models. Grathwohl, Chen, Bettencourt, Sutskever, Duvenaud https://arxiv.org/pdf/1810.01367.pdf
A general method for amortizing variational filtering. Marino, Cvitkovic, Yue https://arxiv.org/pdf/1811.05090.pdf https://github.com/joelouismarino/amortized-variational-filtering
Handling incomplete heterogeneous data using VAEs. Nazabal, Olmos, Ghahramani, Valera https://arxiv.org/pdf/1807.03653.pdf
Sequential attend, infer, repeat: generative modeling of moving objects. Kosiorek, Kim, Posner, Teh https://arxiv.org/pdf/1806.01794.pdf https://github.com/akosiorek/sqair https://www.youtube.com/watch?v=-IUNQgSLE0c&feature=youtu.be
Doubly reparameterized gradient estimators for monte carlo objectives. Tucker, Lawson, Gu, Maddison https://arxiv.org/pdf/1810.04152.pdf
Interpretable intuitive physics model. Ye, Wang, Davidson, Gupta https://arxiv.org/pdf/1808.10002.pdf https://github.com/tianye95/interpretable-intuitive-physics-model
Normalizing Flows Tutorial, Part 2: Modern Normalizing Flows. Eric Jang https://blog.evjang.com/2018/01/nf2.html
Neural autoregressive flows. Huang, Krueger, Lacoste, Courville https://medium.com/element-ai-research-lab/neural-autoregressive-flows-f164d6b8e462 https://arxiv.org/pdf/1804.00779.pdf https://github.com/CW-Huang/NAF
Gaussian process prior variational autoencoders. Casale, Dalca, Saglietti, Listgarten, Fusi https://papers.nips.cc/paper/8238-gaussian-process-prior-variational-autoencoders.pdf
ACVAE-VC: non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder. Kameoka, Kaneko, Tanaka, Hojo https://arxiv.org/pdf/1808.05092.pdf
Discovering interpretable representations for both deep generative and discriminative models. Adel, Ghahramani, Weller http://mlg.eng.cam.ac.uk/adrian/ICML18-Discovering.pdf
Autoregressive quantile networks for generative modelling. Ostrovski, Dabney, Munos https://arxiv.org/pdf/1806.05575.pdf
Probabilistic video generation using holistic attribute control. He, Lehrmann, Marino, Mori, Sigal https://arxiv.org/pdf/1803.08085.pdf
Bias and generalization in deep generative models: an empirical study. Zhao, Ren, Yuan, Song, Goodman, Ermon https://arxiv.org/pdf/1811.03259.pdf https://ermongroup.github.io/blog/bias-and-generalization-dgm/ https://github.com/ermongroup/BiasAndGeneralization/tree/master/Evaluate
On variational lower bounds of mutual information. Poole, Ozair, van den Oord, Alemi, Tucker http://bayesiandeeplearning.org/2018/papers/136.pdf
GAN - why it is so hard to train generative adversarial networks. Hui https://medium.com/@jonathan_hui/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b
Counterfactuals uncover the modular structure of deep generative models. Besserve, Sun, Scholkopf. https://arxiv.org/pdf/1812.03253.pdf
Learning independent causal mechanisms. Parascandolo, Kilbertus, Rojas-Carulla, Scholkopf https://arxiv.org/pdf/1712.00961.pdf
Emergence of invariance and disentanglement in deep representations. Achille, Soatto https://arxiv.org/pdf/1706.01350.pdf
Variational memory encoder-decoder. Le, Tran, Nguyen, Venkatesh https://arxiv.org/pdf/1807.09950.pdf https://github.com/thaihungle/VMED
Variational autoencoders for collaborative filtering. Liang, Krishnan, Hoffman, Jebara https://arxiv.org/pdf/1802.05814.pdf
Invariant representations without adversarial training. Moyer, Gao, Brekelmans, Steeg, Galstyan http://papers.nips.cc/paper/8122-invariant-representations-without-adversarial-training.pdf https://github.com/dcmoyer/inv-rep
Density estimation: Variational autoencoders. Rui Shu http://ruishu.io/2018/03/14/vae/
TherML: The thermodynamics of machine learning. Alemi, Fischer https://arxiv.org/pdf/1807.04162.pdf
Leveraging the exact likelihood of deep latent variable models. Mattei, Frellsen https://arxiv.org/pdf/1802.04826.pdf
What is wrong with VAEs? Kosiorek http://akosiorek.github.io/ml/2018/03/14/what_is_wrong_with_vaes.html
Stochastic variational video prediction. Babaeizadeh, Finn, Erhan, Campbell, Levine https://arxiv.org/pdf/1710.11252.pdf https://github.com/alexlee-gk/video_prediction
Variational attention for sequence-to-sequence models. Bahuleyan, Mou, Vechtomova, Poupart https://arxiv.org/pdf/1712.08207.pdf https://github.com/variational-attention/tf-var-attention
FactorVAE: Disentangling by factorising. Kim, Mnih https://arxiv.org/pdf/1802.05983.pdf
Disentangling factors of variation with cycle-consistent variational autoencoders. Jha, Anand, Singh, Veeravasarapu https://arxiv.org/pdf/1804.10469.pdf https://github.com/ananyahjha93/cycle-consistent-vae
Isolating sources of disentanglement in VAEs. Chen, Li, Grosse, Duvenaud https://arxiv.org/pdf/1802.04942.pdf
VAE with a VampPrior. Tomczak, Welling https://arxiv.org/pdf/1705.07120.pdf
A Framework for the quantitative evaluation of disentangled representations. Eastwood, Williams https://openreview.net/pdf?id=By-7dz-AZ https://github.com/cianeastwood/qedr
Recent advances in autoencoder based representation learning. Tschannen, Bachem, Lucic https://arxiv.org/pdf/1812.05069.pdf
InfoVAE: Balancing learning and inference in variational autoencoders. Zhao, Song, Ermon https://arxiv.org/pdf/1706.02262.pdf
Understanding disentangling in Beta-VAE. Burgess, Higgins, Pal, Matthey, Watters, Desjardins, Lerchner https://arxiv.org/pdf/1804.03599.pdf
Hidden Talents of the Variational autoencoder. Dai, Wang, Aston, Hua, Wipf https://arxiv.org/pdf/1706.05148.pdf
Variational Inference of disentangled latent concepts from unlabeled observations. Kumar, Sattigeri, Balakrishnan https://arxiv.org/abs/1711.00848
Self-supervised learning of a facial attribute embedding from video. Wiles, Koepke, Zisserman http://www.robots.ox.ac.uk/~vgg/publications/2018/Wiles18a/wiles18a.pdf
Wasserstein auto-encoders. Tolstikhin, Bousquet, Gelly, Scholkopf https://arxiv.org/pdf/1711.01558.pdf
A two-step disentanglement method. Hadad, Wolf, Shahar http://openaccess.thecvf.com/content_cvpr_2018/papers/Hadad_A_Two-Step_Disentanglement_CVPR_2018_paper.pdf https://github.com/naamahadad/A-Two-Step-Disentanglement-Method
Taming VAEs. Rezende, Viola https://arxiv.org/pdf/1810.00597.pdf https://github.com/denproc/Taming-VAEs https://github.com/syncrostone/Taming-VAEs
IntroVAE: Introspective variational autoencoders for photographic image synthesis. Huang, Li, He, Sun, Tan https://arxiv.org/pdf/1807.06358.pdf https://github.com/dragen1860/IntroVAE-Pytorch
Information constraints on auto-encoding variational bayes. Lopez, Regier, Jordan, Yosef https://papers.nips.cc/paper/7850-information-constraints-on-auto-encoding-variational-bayes.pdf https://github.com/romain-lopez/HCV
Learning disentangled joint continuous and discrete representations. Dupont https://papers.nips.cc/paper/7351-learning-disentangled-joint-continuous-and-discrete-representations.pdf https://github.com/Schlumberger/joint-vae
Neural discrete representation learning. van den Oord, Vinyals, Kavukcuoglu https://arxiv.org/pdf/1711.00937.pdf https://github.com/1Konny/VQ-VAE https://github.com/ritheshkumar95/pytorch-vqvae
Disentangled sequential autoencoder. Li, Mandt https://arxiv.org/abs/1803.02991 https://github.com/yatindandi/Disentangled-Sequential-Autoencoder
Variational inference: A review for statisticians. Blei, Kucukelbir, McAuliffe https://arxiv.org/pdf/1601.00670.pdf
Advances in variational inference. Zhang, Butepage, Kjellstrom, Mandt https://arxiv.org/pdf/1711.05597.pdf
Auto-encoding total correlation explanation. Gao, Brekelmans, Steeg, Galstyan https://arxiv.org/abs/1802.05822 Closest: https://github.com/gregversteeg/CorEx
Fixing a broken ELBO. Alemi, Poole, Fischer, Dillon, Saurous, Murphy https://arxiv.org/pdf/1711.00464.pdf
The information autoencoding family: a lagrangian perspective on latent variable generative models. Zhao, Song, Ermon https://arxiv.org/pdf/1806.06514.pdf https://github.com/ermongroup/lagvae
Debiasing evidence approximations: on importance-weighted autoencoders and jackknife variational inference. Nowozin https://openreview.net/pdf?id=HyZoi-WRb https://github.com/microsoft/jackknife-variational-inference
Unsupervised discrete sentence representation learning for interpretable neural dialog generation. Zhao, Lee, Eskenazi https://vimeo.com/285802293 https://arxiv.org/pdf/1804.08069.pdf https://github.com/snakeztc/NeuralDialog-LAED
Dual swap disentangling. Feng, Wang, Ke, Zeng, Tao, Song https://papers.nips.cc/paper/7830-dual-swap-disentangling.pdf
Multimodal generative models for scalable weakly-supervised learning. Wu, Goodman https://papers.nips.cc/paper/7801-multimodal-generative-models-for-scalable-weakly-supervised-learning.pdf https://github.com/mhw32/multimodal-vae-public https://github.com/panpan2/Multimodal-Variational-Autoencoder
Do deep generative models know what they don't know? Nalisnick, Matsukawa, Teh, Gorur, Lakshminarayanan https://arxiv.org/pdf/1810.09136.pdf
Glow: generative flow with invertible 1x1 convolutions. Kingma, Dhariwal https://arxiv.org/pdf/1807.03039.pdf https://github.com/openai/glow https://github.com/pytorch/glow
Inference suboptimality in variational autoencoders. Cremer, Li, Duvenaud https://arxiv.org/pdf/1801.03558.pdf https://github.com/chriscremer/Inference-Suboptimality
Adversarial Variational Bayes: unifying variational autoencoders and generative adversarial networks. Mescheder, Nowozin, Geiger https://arxiv.org/pdf/1701.04722.pdf https://github.com/LMescheder/AdversarialVariationalBayes
Semi-amortized variational autoencoders. Kim, Wiseman, Miller, Sontag, Rush https://arxiv.org/pdf/1802.02550.pdf https://github.com/harvardnlp/sa-vae
Spherical Latent Spaces for stable variational autoencoders. Xu, Durrett https://arxiv.org/pdf/1808.10805.pdf https://github.com/jiacheng-xu/vmf_vae_nlp
Hyperspherical variational auto-encoders. Davidson, Falorsi, De Cao, Kipf, Tomczak https://arxiv.org/pdf/1804.00891.pdf https://github.com/nicola-decao/s-vae-tf https://github.com/nicola-decao/s-vae-pytorch
Fader networks: manipulating images by sliding attributes. Lample, Zeghidour, Usunier, Bordes, Denoyer, Ranzato https://arxiv.org/pdf/1706.00409.pdf https://github.com/facebookresearch/FaderNetworks
Training VAEs under structured residuals. Dorta, Vicente, Agapito, Campbell, Prince, Simpson https://arxiv.org/pdf/1804.01050.pdf https://github.com/Garoe/tf_mvg
oi-VAE: output interpretable VAEs for nonlinear group factor analysis. Ainsworth, Foti, Lee, Fox https://arxiv.org/pdf/1802.06765.pdf https://github.com/samuela/oi-vae
infoCatVAE: representation learning with categorical variational autoencoders. Lelarge, Pineau https://arxiv.org/pdf/1806.08240.pdf https://github.com/edouardpineau/infoCatVAE
Iterative Amortized inference. Marino, Yue, Mandt https://arxiv.org/pdf/1807.09356.pdf https://vimeo.com/287766880 https://github.com/joelouismarino/iterative_inference
On unifying Deep Generative Models. Hu, Yang, Salakhutdinov, Xing https://arxiv.org/pdf/1706.00550.pdf
Diverse Image-to-image translation via disentangled representations. Lee, Tseng, Huang, Singh, Yang https://arxiv.org/pdf/1808.00948.pdf https://github.com/HsinYingLee/DRIT
PIONEER networks: progressively growing generative autoencoder. Heljakka, Solin, Kannala https://arxiv.org/pdf/1807.03026.pdf https://github.com/AaltoVision/pioneer
Towards a definition of disentangled representations. Higgins, Amos, Pfau, Racaniere, Matthey, Rezende, Lerchner https://arxiv.org/pdf/1812.02230.pdf
Life-long disentangled representation learning with cross-domain latent homologies. Achille, Eccles, Matthey, Burgess, Watters, Lerchner, Higgins https://arxiv.org/pdf/1808.06508.pdf
Learning deep disentangled embeddings with F-statistic loss. Ridgeway, Mozer https://arxiv.org/pdf/1802.05312.pdf https://github.com/kridgeway/f-statistic-loss-nips-2018
Learning latent subspaces in variational autoencoders. Klys, Snell, Zemel https://arxiv.org/pdf/1812.06190.pdf
On the latent space of Wasserstein auto-encoders. Rubenstein, Scholkopf, Tolstikhin. https://arxiv.org/pdf/1802.03761.pdf https://github.com/tolstikhin/wae
Learning disentangled representations with Wasserstein auto-encoders. Rubenstein, Scholkopf, Tolstikhin https://openreview.net/pdf?id=Hy79-UJPM
The mutual autoencoder: controlling information in latent code representations. Phuong, Kushman, Nowozin, Tomioka, Welling https://openreview.net/pdf?id=HkbmWqxCZ http://2017.ds3-datascience-polytechnique.fr/wp-content/uploads/2017/08/DS3_posterID_048.pdf
Auxiliary guided autoregressive variational autoencoders. Lucas, Verbeek https://openreview.net/pdf?id=HkGcX--0- https://github.com/pclucas14/aux-vae
Interventional robustness of deep latent variable models. Suter, Miladinovic, Bauer, Scholkopf https://pdfs.semanticscholar.org/8028/a56d6f9d2179416d86837b447c6310bd371d.pdf
Understanding degeneracies and ambiguities in attribute transfer. Szabo, Hu, Portenier, Zwicker, Favaro http://openaccess.thecvf.com/content_ECCV_2018/papers/Attila_Szabo_Understanding_Degeneracies_and_ECCV_2018_paper.pdf
DNA-GAN: learning disentangled representations from multi-attribute images. Xiao, Hong, Ma https://arxiv.org/pdf/1711.05415.pdf https://github.com/Prinsphield/DNA-GAN
Normalizing flows. Kosiorek http://akosiorek.github.io/ml/2018/04/03/norm_flows.html
Hamiltonian variational auto-encoder. Caterini, Doucet, Sejdinovic https://arxiv.org/pdf/1805.11328.pdf
Causal generative neural networks. Goudet, Kalainathan, Caillou, Guyon, Lopez-Paz, Sebag. https://arxiv.org/pdf/1711.08936.pdf https://github.com/GoudetOlivier/CGNN
Flow-GAN: Combining maximum likelihood and adversarial learning in generative models. Grover, Dhar, Ermon https://arxiv.org/pdf/1705.08868.pdf https://github.com/ermongroup/flow-gan
Linked causal variational autoencoder for inferring paired spillover effects. Rakesh, Guo, Moraffah, Agarwal, Liu https://arxiv.org/pdf/1808.03333.pdf https://github.com/rguo12/CIKM18-LCVA
Unsupervised anomaly detection via variational auto-encoder for seasonal KPIs in web applications. Xu, Chen, Zhao, Li, Bu, Li, Liu, Zhao, Pei, Feng, Chen, Wang, Qiao https://arxiv.org/pdf/1802.03903.pdf
Mutual information neural estimation. Belghazi, Baratin, Rajeswar, Ozair, Bengio, Hjelm. https://arxiv.org/pdf/1801.04062.pdf https://github.com/sungyubkim/MINE-Mutual-Information-Neural-Estimation- https://github.com/mzgubic/MINE
Explorations in homeomorphic variational auto-encoding. Falorsi, de Haan, Davidson, De Cao, Weiler, Forre, Cohen. https://arxiv.org/pdf/1807.04689.pdf https://github.com/pimdh/lie-vae
Hierarchical variational memory network for dialogue generation. Chen, Ren, Tang, Zhao, Yin https://dl.acm.org/doi/10.1145/3178876.3186077
World models. Ha, Schmidhuber https://arxiv.org/pdf/1803.10122.pdf
Grammar variational autoencoders. Kusner, Paige, Hernandez-Lobato https://arxiv.org/pdf/1703.01925.pdf
The multi-entity variational autoencoder. Nash, Eslami, Burgess, Higgins, Zoran, Weber, Battaglia https://charlienash.github.io/assets/docs/mevae2017.pdf
Towards a neural statistician. Edwards, Storkey https://arxiv.org/abs/1606.02185
The concrete distribution: a continuous relaxation of discrete random variables. Maddison, Mnih, Teh https://arxiv.org/pdf/1611.00712.pdf
Categorical reparameterization with Gumbel-Softmax. Jang, Gu, Poole https://arxiv.org/abs/1611.01144
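The two entries above introduce the Gumbel-Softmax / Concrete relaxation for backpropagating through categorical latents. As a quick reference (a minimal numpy sketch, not drawn from either paper's code release; the function name is illustrative):

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    """Draw a relaxed one-hot sample from a categorical distribution:
    add i.i.d. Gumbel(0, 1) noise to the logits, then apply a
    temperature-scaled softmax. As temperature -> 0 the sample
    approaches a discrete one-hot vector."""
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel_noise) / temperature
    y = y - y.max()  # shift for numerical stability before exponentiating
    return np.exp(y) / np.exp(y).sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax_sample(np.log(np.array([0.7, 0.2, 0.1])), temperature=0.5, rng=rng)
# sample lies on the simplex; at low temperature it concentrates on one category
```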
Opening the black box of deep neural networks via information. Shwartz-Ziv, Tishby https://arxiv.org/pdf/1703.00810.pdf https://www.youtube.com/watch?v=gOn8Po_NPe4
Discovering causal signals in images. Lopez-Paz, Nishihara, Chintala, Scholkopf, Bottou https://arxiv.org/pdf/1605.08179.pdf
Autoencoding variational inference for topic models. Srivastava, Sutton https://arxiv.org/pdf/1703.01488.pdf
Hidden Markov model variational autoencoder for acoustic unit discovery. Ebbers, Heymann, Drude, Glarner, Haeb-Umbach, Raj https://www.isca-speech.org/archive/Interspeech_2017/pdfs/1160.PDF
Application of variational autoencoders for aircraft turbomachinery design. Zalger http://cs229.stanford.edu/proj2017/final-reports/5231979.pdf
Semi-supervised learning with variational autoencoders. Keng http://bjlkeng.github.io/posts/semi-supervised-learning-with-variational-autoencoders/
Causal effect inference with deep latent variable models. Louizos, Shalit, Mooij, Sontag, Zemel, Welling https://arxiv.org/pdf/1705.08821.pdf https://github.com/AMLab-Amsterdam/CEVAE
beta-VAE: learning basic visual concepts with a constrained variational framework. Higgins, Matthey, Pal, Burgess, Glorot, Botvinick, Mohamed, Lerchner https://openreview.net/pdf?id=Sy2fzU9gl
Challenges in disentangling independent factors of variation. Szabo, Hu, Portenier, Favaro, Zwicker https://arxiv.org/pdf/1711.02245.pdf https://github.com/ananyahjha93/challenges-in-disentangling
Composing graphical models with neural networks for structured representations and fast inference. Johnson, Duvenaud, Wiltschko, Datta, Adams https://arxiv.org/pdf/1603.06277.pdf
Split-brain autoencoders: unsupervised learning by cross-channel prediction. Zhang, Isola, Efros https://arxiv.org/pdf/1611.09842.pdf
Learning disentangled representations with semi-supervised deep generative models. Siddharth, Paige, van de Meent, Desmaison, Goodman, Kohli, Wood, Torr https://papers.nips.cc/paper/7174-learning-disentangled-representations-with-semi-supervised-deep-generative-models.pdf https://github.com/probtorch/probtorch
Learning hierarchical features from generative models. Zhao, Song, Ermon https://arxiv.org/pdf/1702.08396.pdf https://github.com/ermongroup/Variational-Ladder-Autoencoder
Multi-level variational autoencoder: learning disentangled representations from grouped observations. Bouchacourt, Tomioka, Nowozin https://arxiv.org/pdf/1705.08841.pdf
Neural Face editing with intrinsic image disentangling. Shu, Yumer, Hadap, Sunkavalli, Shechtman, Samaras http://openaccess.thecvf.com/content_cvpr_2017/papers/Shu_Neural_Face_Editing_CVPR_2017_paper.pdf https://github.com/zhixinshu/NeuralFaceEditing
Variational Lossy Autoencoder. Chen, Kingma, Salimans, Duan, Dhariwal, Schulman, Sutskever, Abbeel https://arxiv.org/abs/1611.02731 https://github.com/jiamings/tsong.me/blob/master/_posts/reading/2016-11-08-lossy-vae.md
Unsupervised learning of disentangled and interpretable representations from sequential data. Hsu, Zhang, Glass https://papers.nips.cc/paper/6784-unsupervised-learning-of-disentangled-and-interpretable-representations-from-sequential-data.pdf https://github.com/wnhsu/FactorizedHierarchicalVAE https://github.com/wnhsu/ScalableFHVAE
Factorized variational autoencoder for modeling audience reactions to movies. Deng, Navarathna, Carr, Mandt, Yue, Matthews, Mori http://www.yisongyue.com/publications/cvpr2017_fvae.pdf
Learning latent representations for speech generation and transformation. Hsu, Zhang, Glass https://arxiv.org/pdf/1704.04222.pdf https://github.com/wnhsu/SpeechVAE
Unsupervised learning of disentangled representations from video. Denton, Birodkar https://papers.nips.cc/paper/7028-unsupervised-learning-of-disentangled-representations-from-video.pdf https://github.com/ap229997/DRNET
Laplacian pyramid of conditional variational autoencoders. Dorta, Vicente, Agapito, Campbell, Prince, Simpson http://cs.bath.ac.uk/~nc537/papers/cvmp17_LapCVAE.pdf
Neural Photo Editing with Introspective Adversarial Networks. Brock, Lim, Ritchie, Weston https://arxiv.org/pdf/1609.07093.pdf https://github.com/ajbrock/Neural-Photo-Editor
Discrete Variational Autoencoder. Rolfe https://arxiv.org/pdf/1609.02200.pdf https://github.com/QuadrantAI/dvae
Reinterpreting importance-weighted autoencoders. Cremer, Morris, Duvenaud https://arxiv.org/pdf/1704.02916.pdf https://github.com/FighterLYL/iwae
Density Estimation using realNVP. Dinh, Sohl-Dickstein, Bengio https://arxiv.org/pdf/1605.08803.pdf https://github.com/taesungp/real-nvp https://github.com/chrischute/real-nvp
JADE: Joint autoencoders for disentanglement. Banijamali, Karimi, Wong, Ghodsi https://arxiv.org/pdf/1711.09163.pdf
Joint Multimodal learning with deep generative models. Suzuki, Nakayama, Matsuo https://openreview.net/pdf?id=BkL7bONFe https://github.com/masa-su/jmvae
Towards a deeper understanding of variational autoencoding models. Zhao, Song, Ermon https://arxiv.org/pdf/1702.08658.pdf https://github.com/ermongroup/Sequential-Variational-Autoencoder
Deep unsupervised clustering with Gaussian mixture variational autoencoders. Dilokthanakul, Mediano, Garnelo, Lee, Salimbeni, Arulkumaran, Shanahan https://arxiv.org/pdf/1611.02648.pdf https://github.com/Nat-D/GMVAE https://github.com/psanch21/VAE-GMVAE
On the challenges of learning with inference networks on sparse, high-dimensional data. Krishnan, Liang, Hoffman https://arxiv.org/pdf/1710.06085.pdf https://github.com/rahulk90/vae_sparse
Stick-breaking Variational Autoencoders. Nalisnick, Smyth https://arxiv.org/pdf/1605.06197.pdf https://github.com/sporsho/hdp-vae
Deep variational canonical correlation analysis. Wang, Yan, Lee, Livescu https://arxiv.org/pdf/1610.03454.pdf https://github.com/edchengg/VCCA_pytorch
Nonparametric variational auto-encoders for hierarchical representation learning. Goyal, Hu, Liang, Wang, Xing https://arxiv.org/pdf/1703.07027.pdf https://github.com/bobchennan/VAE_NBP/blob/master/report.markdown
PixelSNAIL: An improved autoregressive generative model. Chen, Mishra, Rohaninejad, Abbeel https://arxiv.org/pdf/1712.09763.pdf https://github.com/neocxi/pixelsnail-public
Improved Variational Inference with inverse autoregressive flows. Kingma, Salimans, Jozefowicz, Chen, Sutskever, Welling https://arxiv.org/pdf/1606.04934.pdf https://github.com/kefirski/bdir_vae
It takes (only) two: adversarial generator-encoder networks. Ulyanov, Vedaldi, Lempitsky https://arxiv.org/pdf/1704.02304.pdf https://github.com/DmitryUlyanov/AGE
Symmetric Variational Autoencoder and connections to adversarial learning. Chen, Dai, Pu, Li, Su, Carin https://arxiv.org/pdf/1709.01846.pdf
Reconstruction-based disentanglement for pose-invariant face recognition. Peng, Yu, Sohn, Metaxas, Chandraker https://arxiv.org/pdf/1702.03041.pdf https://github.com/zhangjunh/DR-GAN-by-pytorch
Is maximum likelihood useful for representation learning? Huszár https://www.inference.vc/maximum-likelihood-for-representation-learning-2/
Disentangled representation learning GAN for pose-invariant face recognition. Tran, Yin, Liu http://zpascal.net/cvpr2017/Tran_Disentangled_Representation_Learning_CVPR_2017_paper.pdf https://github.com/kayamin/DR-GAN
Improved Variational Autoencoders for text modeling using dilated convolutions. Yang, Hu, Salakhutdinov, Berg-Kirkpatrick https://arxiv.org/pdf/1702.08139.pdf
Improving variational auto-encoders using householder flow. Tomczak, Welling https://arxiv.org/pdf/1611.09630.pdf https://github.com/jmtomczak/vae_householder_flow
Sticking the landing: simple, lower-variance gradient estimators for variational inference. Roeder, Wu, Duvenaud. https://arxiv.org/pdf/1703.09194.pdf https://github.com/geoffroeder/iwae
VEEGAN: Reducing mode collapse in GANs using implicit variational learning. Srivastava, Valkov, Russell, Gutmann. https://arxiv.org/pdf/1705.07761.pdf https://github.com/akashgit/VEEGAN
Discovering discrete latent topics with neural variational inference. Miao, Grefenstette, Blunsom https://arxiv.org/pdf/1706.00359.pdf
Variational approaches for auto-encoding generative adversarial networks. Rosca, Lakshminarayanan, Warde-Farley, Mohamed https://arxiv.org/pdf/1706.04987.pdf
Variational Autoencoder and extensions. Courville https://ift6266h17.files.wordpress.com/2017/03/vae1.pdf
A neural representation of sketch drawings. Ha, Eck https://arxiv.org/pdf/1704.03477.pdf
One-shot generalization in deep generative models. Rezende, Danihelka, Gregor, Wierstra https://arxiv.org/abs/1603.05106
Attend, infer, repeat: fast scene understanding with generative models. Eslami, Heess, Weber, Tassa, Szepesvari, Kavukcuoglu, Hinton https://arxiv.org/pdf/1603.08575.pdf http://akosiorek.github.io/ml/2017/09/03/implementing-air.html https://github.com/aleju/papers/blob/master/neural-nets/Attend_Infer_Repeat.md
Deep feature consistent variational autoencoder. Hou, Shen, Sun, Qiu https://arxiv.org/pdf/1610.00291.pdf https://github.com/sbavon/Deep-Feature-Consistent-Variational-AutoEncoder-in-Tensorflow
Neural variational inference for text processing. Miao, Yu, Grefenstette, Blunsom. https://arxiv.org/pdf/1511.06038.pdf
Domain-adversarial training of neural networks. Ganin, Ustinova, Ajakan, Germain, Larochelle, Laviolette, Marchand, Lempitsky https://arxiv.org/pdf/1505.07818.pdf
Tutorial on Variational Autoencoders. Doersch https://arxiv.org/pdf/1606.05908.pdf
How to train deep variational autoencoders and probabilistic ladder networks. Sonderby, Raiko, Maaloe, Sonderby, Winther https://orbit.dtu.dk/files/121765928/1602.02282.pdf
ELBO surgery: yet another way to carve up the variational evidence lower bound. Hoffman, Johnson http://approximateinference.org/accepted/HoffmanJohnson2016.pdf
Variational inference with normalizing flows. Rezende, Mohamed https://arxiv.org/pdf/1505.05770.pdf https://github.com/ex4sperans/variational-inference-with-normalizing-flows
The Variational Fair Autoencoder. Louizos, Swersky, Li, Welling, Zemel https://arxiv.org/pdf/1511.00830.pdf https://github.com/dendisuhubdy/vfae
Information dropout: learning optimal representations through noisy computations. Achille, Soatto https://arxiv.org/pdf/1611.01353.pdf
Domain separation networks. Bousmalis, Trigeorgis, Silberman, Krishnan, Erhan https://arxiv.org/pdf/1608.06019.pdf https://github.com/fungtion/DSN https://github.com/farnazj/Domain-Separation-Networks
Disentangling factors of variation in deep representations using adversarial training. Mathieu, Zhao, Sprechmann, Ramesh, LeCun https://arxiv.org/pdf/1611.03383.pdf https://github.com/ananyahjha93/disentangling-factors-of-variation-using-adversarial-training
Variational autoencoder for semi-supervised text classification. Xu, Sun, Deng, Tan https://arxiv.org/pdf/1603.02514.pdf https://github.com/wead-hsu/ssvae related: https://github.com/isohrab/semi-supervised-text-classification
Learning what and where to draw. Reed, Sohn, Zhang, Lee https://arxiv.org/pdf/1610.02454.pdf
Attribute2Image: Conditional image generation from visual attributes. Yan, Yang, Sohn, Lee https://arxiv.org/pdf/1512.00570.pdf
Wild Variational Approximations. Li, Liu http://approximateinference.org/2016/accepted/LiLiu2016.pdf
Importance Weighted Autoencoders. Burda, Grosse, Salakhutdinov https://arxiv.org/pdf/1509.00519.pdf https://github.com/yburda/iwae https://github.com/xqding/Importance_Weighted_Autoencoders https://github.com/abdulfatir/IWAE-tensorflow
Stacked What-Where Auto-encoders. Zhao, Mathieu, Goroshin, LeCun https://arxiv.org/pdf/1506.02351.pdf https://github.com/yselivonchyk/Tensorflow_WhatWhereAutoencoder
Disentangling nonlinear perceptual embeddings with multi-query triplet networks. Veit, Belongie, Karaletsos https://www.researchgate.net/profile/Andreas_Veit/publication/301837223_Disentangling_Nonlinear_Perceptual_Embeddings_With_Multi-Query_Triplet_Networks/links/57e2997308ae040ae3c2f3a3/Disentangling-Nonlinear-Perceptual-Embeddings-With-Multi-Query-Triplet-Networks.pdf
Ladder Variational Autoencoders. Sonderby, Raiko, Maaloe, Sonderby, Winther https://arxiv.org/pdf/1602.02282.pdf
Variational autoencoder for deep learning of images, labels and captions. Pu, Gan, Henao, Yuan, Li, Stevens, Carin https://papers.nips.cc/paper/6528-variational-autoencoder-for-deep-learning-of-images-labels-and-captions.pdf
Approximate inference for deep latent Gaussian mixtures. Nalisnick, Hertel, Smyth https://pdfs.semanticscholar.org/f6fe/5e8e25994c188ba6a124462e2cc55f2c5a67.pdf https://github.com/enalisnick/mixture_density_VAEs
Auxiliary Deep Generative Models. Maaloe, Sonderby, Sonderby, Winther https://arxiv.org/pdf/1602.05473.pdf https://github.com/larsmaaloee/auxiliary-deep-generative-models
Variational methods for conditional multimodal deep learning. Pandey, Dukkipati https://arxiv.org/pdf/1603.01801.pdf
PixelVAE: a latent variable model for natural images. Gulrajani, Kumar, Ahmed, Taiga, Visin, Vazquez, Courville https://arxiv.org/pdf/1611.05013.pdf https://github.com/igul222/PixelVAE https://github.com/kundan2510/pixelVAE
Adversarial autoencoders. Makhzani, Shlens, Jaitly, Goodfellow, Frey https://arxiv.org/pdf/1511.05644.pdf https://github.com/conan7882/adversarial-autoencoders
A hierarchical latent variable encoder-decoder model for generating dialogues. Serban, Sordoni, Lowe, Charlin, Pineau, Courville, Bengio http://www.cs.toronto.edu/~lcharlin/papers/vhred_aaai17.pdf
Infinite variational autoencoder for semi-supervised learning. Abbasnejad, Dick https://arxiv.org/pdf/1611.07800.pdf
f-GAN: Training generative neural samplers using variational divergence minimization. Nowozin, Cseke, Tomioka https://arxiv.org/pdf/1606.00709.pdf https://github.com/LynnHo/f-GAN-Tensorflow
DISCO Nets: DISsimilarity Coefficient networks Bouchacourt, Kumar, Nowozin https://arxiv.org/pdf/1606.02556.pdf https://github.com/oval-group/DISCONets
Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. Yang, Reed, Yang, Lee https://arxiv.org/pdf/1601.00706.pdf https://github.com/jimeiyang/deepRotator
Autoencoding beyond pixels using a learned similarity metric. Larsen, Sonderby, Larochelle, Winther https://arxiv.org/pdf/1512.09300.pdf https://github.com/andersbll/autoencoding_beyond_pixels
Generating images with perceptual similarity metrics based on deep networks Dosovitskiy, Brox. https://arxiv.org/pdf/1602.02644.pdf https://github.com/shijx12/DeepSim
A note on the evaluation of generative models. Theis, van den Oord, Bethge. https://arxiv.org/pdf/1511.01844.pdf
InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. Chen, Duan, Houthooft, Schulman, Sutskever, Abbeel https://arxiv.org/pdf/1606.03657.pdf https://github.com/openai/InfoGAN
Disentangled representations in neural models. Whitney https://arxiv.org/abs/1602.02383
A recurrent latent variable model for sequential data. Chung, Kastner, Dinh, Goel, Courville, Bengio https://arxiv.org/pdf/1506.02216.pdf
Unsupervised learning of 3D structure from images. Rezende, Eslami, Mohamed, Battaglia, Jaderberg, Heess https://arxiv.org/pdf/1607.00662.pdf
A survey of inductive biases for factorial representation-learning. Ridgeway https://arxiv.org/pdf/1612.05299.pdf
Short notes on variational bounds with rescaled terms. Rezende https://danilorezende.com/2016/06/27/short-notes-on-variational-bounds-with-rescaled-terms/
Deep learning and the information bottleneck principle. Tishby, Zaslavsky https://arxiv.org/pdf/1503.02406.pdf
Training generative neural networks via Maximum Mean Discrepancy optimization. Dziugaite, Roy, Ghahramani https://arxiv.org/pdf/1505.03906.pdf
NICE: non-linear independent components estimation. Dinh, Krueger, Bengio https://arxiv.org/pdf/1410.8516.pdf
Deep convolutional inverse graphics network. Kulkarni, Whitney, Kohli, Tenenbaum https://arxiv.org/pdf/1503.03167.pdf https://github.com/yselivonchyk/TensorFlow_DCIGN
Learning structured output representation using deep conditional generative models. Sohn, Yan, Lee https://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf https://github.com/wsjeon/ConditionalVariationalAutoencoder
Latent variable model with diversity-inducing mutual angular regularization. Xie, Deng, Xing https://arxiv.org/pdf/1512.07336.pdf
DRAW: a recurrent neural network for image generation. Gregor, Danihelka, Graves, Rezende, Wierstra. https://arxiv.org/pdf/1502.04623.pdf https://github.com/ericjang/draw
Variational Inference II. Xing, Zheng, Hu, Deng https://www.cs.cmu.edu/~epxing/Class/10708-15/notes/10708_scribe_lecture13.pdf
Auto-encoding variational Bayes. Kingma, Welling https://arxiv.org/pdf/1312.6114.pdf
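Since so many entries in this list build on the AEVB paper above, a minimal numpy sketch of its two key ingredients may be a useful reference: the reparameterization trick and the closed-form Gaussian KL term (function names are my own, not from the paper):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(exp(log_var))) as a deterministic function
    of (mu, log_var) plus exogenous noise, so gradients can flow through
    the sampling step: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), the closed-form
    regularizer in the AEVB objective."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

rng = np.random.default_rng(0)
z = reparameterize(np.zeros(4), np.zeros(4), rng)  # a draw from N(0, I)
```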
Learning to disentangle factors of variation with manifold interaction. Reed, Sohn, Zhang, Lee http://proceedings.mlr.press/v32/reed14.pdf
Semi-supervised learning with deep generative models. Kingma, Rezende, Mohamed, Welling https://papers.nips.cc/paper/5352-semi-supervised-learning-with-deep-generative-models.pdf https://github.com/saemundsson/semisupervised_vae https://github.com/Response777/Semi-supervised-VAE
Stochastic backpropagation and approximate inference in deep generative models. Rezende, Mohamed, Wierstra https://arxiv.org/pdf/1401.4082.pdf https://github.com/ashwindcruz/dgm/tree/master/adgm_mnist
Representation learning: a review and new perspectives. Bengio, Courville, Vincent https://arxiv.org/pdf/1206.5538.pdf
Transforming Auto-encoders. Hinton, Krizhevsky, Wang https://www.cs.toronto.edu/~hinton/absps/transauto6.pdf
Graphical models, exponential families, and variational inference. Wainwright, Jordan
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning. Honkela, Valpola https://www.cs.helsinki.fi/u/ahonkela/papers/infview.pdf
The information bottleneck method. Tishby, Pereira, Bialek https://arxiv.org/pdf/physics/0004057.pdf