Found 776 repositories (showing 30)
timbmg
Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
hwalsuklee
Tensorflow implementation of variational auto-encoder for MNIST
Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in pytorch.
lyeoni
No description available
dragon-wang
Implementation of VAE and CVAE using Pytorch on MNIST dataset
snakers4
Comparing FC VAE / FCN VAE / PCA / UMAP on MNIST / FMNIST
dragen1860
Pytorch Implementation of variational auto-encoder for MNIST
bvezilic
PyTorch implementation of Variational Autoencoder (VAE) on MNIST dataset.
gtoubassi
Semi-supervised learning with mnist using variational autoencoders. An unsupervised representation is learned which allows for superior classification results with limited labels.
JeremyCCHsu
Semi-Supervised Learning with Categorical VAE (experimented on MNIST)
praeclarumjj3
VQ-VAE implementation in Pytorch
No description available
ekzhang
Conditional variational autoencoder applied to EMNIST + an interactive demo to explore the latent space.
lisadunlap
VAE-GAN applied to the MNIST Digits
AioChem
An implementation of VAE and CVAE for generating handwritten-digit images
debtanu177
Conditional VAE using CNN on MNIST in PyTorch
williamcfrancis
Pytorch implementation of a Variational Autoencoder (VAE) that learns from the MNIST dataset and generates images of altered handwritten digits.
We present a coupled Variational Auto-Encoder (VAE) method that improves the accuracy and robustness of probabilistic inference on represented data. The new method models the dependency between input feature vectors (images) and weighs outliers with a higher penalty by generalizing the original loss function to the coupled entropy function, using the principles of nonlinear statistical coupling. We evaluate the performance of the coupled VAE model on the MNIST dataset. Compared with the traditional VAE algorithm, the output images generated by the coupled VAE method are clearer and less blurry. Visualizing the input images embedded in the 2D latent variable space provides deeper insight into the structure of the new model with the coupled loss function: the latent variable has a smaller deviation, and the outputs are generated from a more compact latent space. We analyze the histograms of likelihoods for the input images using generalized mean metrics: an increased geometric mean shows that the average likelihood of the input data is improved, an increase in the -2/3 mean, which is sensitive to outliers, indicates improved robustness, and the decisiveness, measured by the arithmetic mean of the likelihoods, is unchanged.
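The generalized mean metrics the abstract evaluates (arithmetic mean for decisiveness, geometric mean for average likelihood, -2/3 mean for robustness) are all instances of the power mean. A minimal sketch in plain Python; `power_mean` is a hypothetical helper name, not code from the repository:

```python
import math

def power_mean(values, r):
    """Generalized (power) mean of positive values with exponent r.

    r = 1    -> arithmetic mean (the abstract's "decisiveness")
    r = 0    -> geometric mean (limit case; reflects average log-likelihood)
    r = -2/3 -> the outlier-sensitive mean used as a robustness metric
    """
    if not values or any(v <= 0 for v in values):
        raise ValueError("defined here for non-empty, strictly positive values")
    if r == 0:
        # Limit of the power mean as r -> 0 is the geometric mean.
        return math.exp(sum(math.log(v) for v in values) / len(values))
    return (sum(v ** r for v in values) / len(values)) ** (1.0 / r)

# Toy per-image likelihoods; the small outlier drags the r = -2/3 mean down
# far more than the arithmetic mean, which is why it signals robustness.
likelihoods = [0.8, 0.5, 0.01]
print(power_mean(likelihoods, 1))       # arithmetic mean
print(power_mean(likelihoods, 0))       # geometric mean
print(power_mean(likelihoods, -2 / 3))  # robustness metric
```

Because the power mean is monotone in the exponent, the three metrics are always ordered: -2/3 mean ≤ geometric mean ≤ arithmetic mean.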
Whalefishin
A minimal example for training a flow matching model in a pretrained VAE's latent space to generate MNIST digits.
lyeoni
No description available
zhihanyang2022
Minimal VAE, Conditional VAE (CVAE), Gaussian Mixture VAE (GMVAE) and Variational RNN (VRNN) in PyTorch, trained on MNIST.
SSS135
VAE + Quantile Networks for MNIST
piyush01123
Variational autoencoder in Keras on MNIST images
Implementation of a Conditional VAE trained on MNIST with TensorFlow 1.3.0.
Dive into the world of Variational Autoencoders (VAEs) with MNIST! 🎨✨ Explore variable latent sizes (2, 4, 16) to see how they affect reconstruction, latent space visualizations, and performance metrics 📊 (MSE, SSIM, PSNR).
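Of the reconstruction metrics mentioned above, MSE and PSNR are simple enough to sketch in plain Python (SSIM needs windowed statistics and is omitted). A minimal sketch on flattened pixel sequences; the helper names are illustrative, not from the repository:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / err)

# Toy example: every reconstructed pixel is off by 0.1 on a [0, 1] scale.
original = [0.0, 0.5, 1.0, 0.25]
reconstruction = [0.1, 0.4, 0.9, 0.35]
print(mse(original, reconstruction))   # 0.01
print(psnr(original, reconstruction))  # 20.0 dB
```

Larger latent sizes typically lower MSE (and raise PSNR) at the cost of a less structured latent space, which is the trade-off the repository explores.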
ANLGBOY
VAE for Fashion MNIST with PyTorch
sarthak0120
Python code (Keras) implementing a Variational Autoencoder Generative Adversarial Network (a GAN replaces the decoder in the VAE). The MNIST dataset is reconstructed using the VAEGAN.
Youngsiii
Example of replacing the MLP with a KAN in an autoencoder (AE) and a variational autoencoder (VAE)
dariocazzani
Implementation of CoordConv (Convolution and Deconvolution) for a Variational Autoencoder applied to MNIST
owenliang
CodeBook, VQ-VAE, MNIST