Found 33 repositories (showing 30)
taesungp
Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
WeilunWang
Official Implementation of Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation (ICCV 2021)
cryu854
"Contrastive Learning for Unpaired Image-to-Image Translation" in TensorFlow 2
ganslate-team
Simple and extensible GAN image-to-image translation framework. Supports natural and medical images.
A concise PyTorch implementation of CUT (Contrastive unpaired image-to-image translation)
Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation
HeranYang
The official Tensorflow implementation of the paper "A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation" in MICCAI 2021.
SSongWang
Mix-Domain Contrastive Learning for Unpaired H&E-to-IHC Stain Translation (ICIP 2024)
kvn-257
An Official Implementation of the paper "Contrastive-SDE: Guiding Stochastic Differential Equations with Contrastive Learning for Unpaired Image-to-Image Translation"
DeepMIALab
FFPE++: Improving the Quality of Formalin-Fixed Paraffin-Embedded Tissue Imaging via Contrastive Unpaired Image-to-Image Translation
XiudingCai
Constraining Multi-scale Pairwise Features between Encoder and Decoder Using Contrastive Learning for Unpaired Image-to-Image Translation
While Generative Adversarial Networks (GANs) have been a breakthrough in computer vision, there are multiple styles of GANs, each tailored to a specific problem. "Behind the mask", though it may sound trivial, points to a critical use case: unsupervised image-to-image translation, in which a model discovers the distinctive features of one set of images and generates images belonging to the other set by learning the distinctions between the two. This technique is most useful when paired images are not available. Algorithms like Pix2Pix are not viable here, since paired images are expensive and difficult to obtain. To tackle this, CycleGAN, DualGAN, and DiscoGAN show how a model can learn the mapping from one image domain to another using unpaired data. Even so, our problem, reconstructing human faces by removing their facial masks, requires non-linear transformations and remains difficult. Moreover, these techniques also alter the background and change unrelated objects as their generators and discriminators produce fake images. Our goal is an approach that not only detects the discriminating factors between the two sets of pictures but also changes only the targeted areas of an image, leaving the remaining details untouched. One alternative is Contrast GAN, which selects a part of an image, transforms it based on the differentiating factors, and pastes it back into the original image. However, this raised an issue in our case: the face masks would have had to be of identical shape and dimensions, which they were not.
To overcome these challenges, we employ an attention-based technique, AGGAN (Attention-Guided Generative Adversarial Networks), for image translation that requires no additional models or parameters to alter a specific part of the image. Like CycleGAN, AGGAN comprises two generators and two discriminators. The two attention-guided generators have built-in attention modules that disentangle the discriminative semantic object from the unwanted parts by producing an attention mask and a content mask; these masks are fused with the input image to create high-quality fake images. We also add losses that reduce variance and keep corresponding images pixel-consistent. We extend the network by applying two subnets to predict the attention and content masks. To avoid omitting details, the network employs two attention masks, one for the foreground and one for the background, so that the foreground is better learned and the background is preserved. The generated content mask is also exposed to multiple types of facial masks, so that the model recognizes a broad spectrum of them, removes them effectively, and learns a richer generation space. Our aim is to translate masked images into high-quality unmasked ones across faces with different skin colors and expressions.
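The fusion of the content mask with the input image under the two attention masks reduces to a simple per-pixel formula. A minimal sketch in plain Python follows; the function name, variable names, and the epsilon normalization are our illustrative assumptions, not the official AGGAN code:

```python
def fuse_pixel(input_px, content_px, fg_attn, bg_attn, eps=1e-8):
    """AGGAN-style fusion at a single pixel.

    The foreground attention weight selects the generated content
    (e.g. the repainted face region); the background attention weight
    preserves the original image. The two weights are normalized so
    they sum to 1, which keeps the background untouched wherever the
    foreground attention is zero.
    """
    total = fg_attn + bg_attn + eps          # avoid division by zero
    return (fg_attn / total) * content_px + (bg_attn / total) * input_px
```

Applied over a whole tensor of pixels (e.g. with PyTorch broadcasting), this is why the background can be preserved exactly: pixels with zero foreground attention pass the input through unchanged.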
No description available
No description available
hayashimasa
PyTorch Implementation of the Contrastive Unpaired Image Translation framework
wilbertcaine
Contrastive Unpaired Image-to-Image Translation Using PyTorch
xiangyu-getklothed
No description available
PoKoHA
Contrastive Unpaired Translation.
No description available
Bunny2Bunny
No description available
IsaevaTatyana
No description available
fco-dv
contrastive unpaired translation with ignite
nazarii828
No description available
No description available
This repository implements the Contrastive Unpaired Translation (CUT) GAN model on the DeepFashion-1 dataset from Kaggle, in PyTorch
irfanfadhullah
Modified version of Contrastive Unpaired Translation (CUT)
akash301191
Contrast-to-contrast MRI transformation using unpaired image translation
AdventuresInDataScience
Package wrapper for contrastive-unpaired-translation instead of using CLI
Brechard
Unofficial simplified implementation of the Contrastive unpaired translation paper, using TensorFlow and Keras.
SyntaxNomad
Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)