# Contrastive Learning

## awesome-self-supervised-learning
Explore a curated compilation of self-supervised learning resources, offering theoretical insights and practical applications in fields such as computer vision, robotics, and natural language processing. Drawing inspiration from influential machine learning projects, this collection highlights self-supervised learning as an emerging trend. It includes critical papers, benchmark codes, and detailed surveys, making it an indispensable resource for researchers and practitioners interested in self-supervised methods. Contributions are encouraged through pull requests to broaden the repository's content and maintain its relevance.
## Awesome-MIM
This project delivers an extensive review of Masked Image Modeling (MIM) and associated techniques in self-supervised representation learning, presenting them in their historical sequence of development. It covers essential topics such as MIM for Transformers, contrastive learning, and applications in various modalities. The analysis includes the progression of self-supervised learning across diverse modalities, underscoring its pivotal role since 2018 in areas like NLP and Computer Vision. Contributions and revisions from the community are welcomed, along with resources such as curated paper lists and formats for academic citations. This is an essential resource for researchers and enthusiasts exploring the developments and practical applications in MIM.
## awesome-graph-self-supervised-learning
Explore this curated collection of self-supervised graph representation learning techniques, categorized into contrastive, generative, and predictive learning. This resource provides an in-depth overview of methodologies and applications, focusing on strategies like pre-training, fine-tuning, joint learning, and unsupervised representation learning tailored for graph data. It is well suited to AI researchers and practitioners exploring advanced graph neural networks and their applications.
## contrastive-unpaired-translation
Discover efficient methods for unpaired image-to-image translation based on patchwise contrastive learning. The approach avoids complex loss functions and inverse networks, yielding faster, more resource-efficient training than CycleGAN. It also extends to single-image training with high-quality results, making it suitable for a variety of applications. Developed collaboratively by UC Berkeley and Adobe Research and presented at ECCV 2020, the method offers memory efficiency and improved distribution matching.
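The core idea is a patchwise InfoNCE loss: a patch in the translated output should match the patch at the same spatial location in the input more than any other patch from that image. Below is a minimal PyTorch sketch of that objective; the function and variable names are illustrative and not the repository's API.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_tgt, temperature=0.07):
    """Patchwise InfoNCE: corresponding patches are positives,
    all other patches from the same image serve as negatives.

    feat_src: (num_patches, dim) features of patches from the input image.
    feat_tgt: (num_patches, dim) features at the same locations in the output.
    """
    feat_src = F.normalize(feat_src, dim=1)
    feat_tgt = F.normalize(feat_tgt, dim=1)
    logits = feat_tgt @ feat_src.T / temperature   # (N, N) patch similarities
    # Positives lie on the diagonal (same spatial location in both images).
    targets = torch.arange(len(logits), device=logits.device)
    return F.cross_entropy(logits, targets)
```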
## SupContrast
This reference implementation explores supervised contrastive learning using SupConLoss, as introduced in "Supervised Contrastive Learning" (Khosla et al., 2020). It compares SupContrast against cross-entropy and SimCLR baselines, showing improved accuracy on CIFAR-10, CIFAR-100, and ImageNet. The project is written in PyTorch, with CIFAR used for the demonstrations. A simplified SupConLoss implementation aids understanding: with labels it acts as a supervised contrastive loss, and without them it degenerates to the unsupervised SimCLR objective. With a ResNet-50 backbone, SupContrast reaches high top-1 accuracy, with further gains from a momentum encoder. Detailed commands and guidance make the models straightforward to train and deploy.
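As a rough illustration of what SupConLoss computes, here is a minimal PyTorch sketch of the supervised contrastive objective (not the repository's exact implementation): embeddings are L2-normalized, and each anchor's positives are the other samples sharing its label.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over a batch of embeddings.

    features: (batch, dim) projection-head outputs.
    labels:   (batch,) integer class labels.
    """
    features = F.normalize(features, dim=1)          # cosine similarities below
    sim = features @ features.T / temperature
    self_mask = torch.eye(len(features), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-comparisons
    # Positives: other samples with the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives
    # (anchors with no positives contribute zero).
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()

# Example: loss = supcon_loss(torch.randn(8, 128), torch.randint(0, 3, (8,)))
```

If no labels are passed and each sample's only positive is its own augmented view, this reduces to the SimCLR-style unsupervised loss the repository also supports.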
## awesome-contrastive-self-supervised-learning
This collection provides a wide range of papers on contrastive self-supervised learning, useful for scholars and industry professionals. Regular updates ensure coverage of topics such as topic modeling, vision-language representation, 3D medical image analysis, and multimodal sentiment analysis. Each entry links to the paper and, where available, its code, giving quick access to cutting-edge methods and experimental setups. Its comprehensive scope makes it an essential reference for anyone following recent progress in contrastive learning.
## awesome-self-supervised-gnn
Discover a curated compilation of research papers on self-supervised learning in graph neural networks (GNNs), organized by year. This resource includes widely-cited studies and offers access to related code for further exploration. Stay informed on recent developments, suggest additions, or report errors. Topics covered include community detection, representation learning, and anomaly detection, with insights into pioneering techniques and applications.
## OpenAI-CLIP
This guide offers a comprehensive tutorial on implementing the CLIP model in PyTorch, demonstrating its ability to link textual input with relevant image retrieval. Drawing on established research examples and benchmark results, it explains the core principles of Contrastive Language-Image Pre-training, whose zero-shot performance can rival classifiers trained directly on ImageNet. The content walks through essential steps such as encoding and projecting multimodal data, details the CLIP model's architecture and loss calculation, and highlights its applicability in research and practical applications.
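At its core, the loss such a tutorial builds up to is a symmetric InfoNCE over matched image-text pairs. A minimal sketch follows; the function and variable names are illustrative, not the tutorial's API.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image/text pairs.

    image_emb, text_emb: (batch, dim) projection-head outputs;
    pair i is the only positive for row/column i.
    """
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = image_emb @ text_emb.T / temperature      # (batch, batch)
    targets = torch.arange(len(logits), device=logits.device)
    # Cross-entropy in both directions: image->text and text->image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```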
## ReCon
The project explores 3D representation learning by combining contrastive and generative pretraining, reaching top-tier results in classification and zero-shot tasks. Built around an encoder-decoder framework with ensemble distillation and cross-modal attention, it mitigates overfitting and scales well across 3D datasets such as ScanObjectNN and ModelNet40, effectively balancing the competing demands of contrastive and generative training for stronger 3D recognition.
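The generic recipe behind such hybrids can be summarized as a weighted sum of a reconstruction term and a contrastive term. The sketch below illustrates that combination in PyTorch; it is a hypothetical illustration, not ReCon's actual architecture or loss weighting.

```python
import torch
import torch.nn.functional as F

def joint_pretrain_loss(recon, target, emb_a, emb_b, alpha=0.5, temperature=0.07):
    """Weighted sum of a generative (reconstruction) term and a
    contrastive (InfoNCE) term between two embedding views.
    Generic contrastive+generative hybrid, not ReCon's objective."""
    gen = F.mse_loss(recon, target)                    # generative term
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.T / temperature                     # cross-view similarities
    idx = torch.arange(len(logits), device=logits.device)
    con = F.cross_entropy(logits, idx)                 # contrastive term
    return alpha * gen + (1 - alpha) * con
```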
## similarity
TensorFlow Similarity provides advanced algorithms for metric learning, covering self-supervised, similarity, and contrastive learning methods. It supports training and evaluating models through integrated losses, metrics, and samplers. The library is designed for ease of use, with distributed training capabilities and multi-modal embedding support. Examples on Google Colab and thorough documentation help users implement and refine models for image clustering and retrieval.
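A minimal end-to-end sketch, modeled on the library's hello-world pattern: a class-balanced sampler feeds a metric loss, and the trained model is indexed for nearest-neighbor retrieval. Names such as MultiShotMemorySampler, MetricEmbedding, SimilarityModel, and single_lookup reflect the documented API as best recalled here; verify against the current docs.

```python
import tensorflow as tf
import tensorflow_similarity as tfsim

# Load and scale MNIST; metric losses need several examples per class per batch.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

# Sampler builds batches with a fixed number of examples per class.
sampler = tfsim.samplers.MultiShotMemorySampler(x_train, y_train, classes_per_batch=10)

inputs = tf.keras.layers.Input(shape=(28, 28, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(256, activation="relu")(x)
outputs = tfsim.layers.MetricEmbedding(64)(x)  # L2-normalized embedding head

model = tfsim.models.SimilarityModel(inputs, outputs)
model.compile(optimizer="adam", loss=tfsim.losses.MultiSimilarityLoss())
model.fit(sampler, epochs=5)

# Index a gallery of known examples, then retrieve neighbors for a query.
model.index(x_train[:2000], y_train[:2000])
neighbors = model.single_lookup(x_train[2000])
```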
## fashion-clip
FashionCLIP uses contrastive learning to improve image-text model performance for fashion. Fine-tuned with over 700K data pairs, it excels in capturing fashion specifics. FashionCLIP 2.0 boosts performance further with updated checkpoints, aiding in tasks like retrieval and parsing. Available on HuggingFace, it supports scalable, sustainable applications with low environmental impact.
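For a quick start, the published checkpoint can be loaded with the standard transformers CLIP classes. A hedged sketch follows; the checkpoint id patrickjohncyh/fashion-clip reflects the HuggingFace listing as best recalled here, and the image path is a placeholder.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint id assumed from the HuggingFace listing; verify before use.
model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")
processor = CLIPProcessor.from_pretrained("patrickjohncyh/fashion-clip")

image = Image.open("dress.jpg")  # placeholder path to a product image
texts = ["a red evening dress", "a denim jacket", "white sneakers"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# Image-to-text similarity scores; softmax gives a distribution over captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```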