contrastive-unpaired-translation
Contrastive Unpaired Translation (CUT) performs unpaired image-to-image translation using patchwise contrastive learning: corresponding patches of the input and output images are pulled together in feature space while non-corresponding patches are pushed apart. This removes the need for cycle-consistency losses and an inverse generator, making training faster and more memory-efficient than CycleGAN while improving distribution matching. The method also extends to training on a single image with high-quality results across a variety of applications. Developed collaboratively by UC Berkeley and Adobe Research, and presented at ECCV 2020.
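The patchwise objective described above is an InfoNCE-style contrastive loss: for each output patch, the corresponding input patch is the positive and other input patches are negatives. Below is a minimal NumPy sketch of that idea; the function name `patch_nce_loss`, the temperature `tau`, and the use of cosine similarity are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """Illustrative InfoNCE loss for one patch (not the repo's code).

    query:     (d,) feature of an output-image patch
    positive:  (d,) feature of the corresponding input-image patch
    negatives: (n, d) features of other (non-corresponding) input patches
    tau:       temperature scaling the similarity logits (assumed value)
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive similarity first, negatives after, scaled by temperature.
    logits = np.array([cos(query, positive)] +
                      [cos(query, neg) for neg in negatives]) / tau
    # Cross-entropy with the positive at index 0 (log-sum-exp for stability).
    logits -= logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]
```

The loss is small when the query patch is close to its positive and far from the negatives, which is exactly the "corresponding patches agree" criterion the method optimizes in place of cycle consistency.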