# Self-Supervised Learning

## awesome-self-supervised-learning
Explore a curated compilation of self-supervised learning resources, offering theoretical insights and practical applications in fields such as computer vision, robotics, and natural language processing. Drawing inspiration from influential machine learning projects, this collection highlights self-supervised learning as an emerging trend. It includes critical papers, benchmark codes, and detailed surveys, making it an indispensable resource for researchers and practitioners interested in self-supervised methods. Contributions are encouraged through pull requests to broaden the repository's content and maintain its relevance.
## awesome-graph-self-supervised-learning
Explore this curated collection of self-supervised graph representation learning techniques, categorized into contrastive, generative, and predictive learning. The resource surveys methodologies and applications, covering strategies such as pre-training, fine-tuning, joint learning, and unsupervised representation learning tailored to graph data. It is well suited to AI researchers and practitioners exploring advanced graph neural networks.
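For context, contrastive methods in this taxonomy typically build two augmented views of the same graph and pull their representations together. A minimal sketch of one common augmentation, random edge dropping, assuming a dense adjacency-matrix representation (function and variable names are illustrative, not from any specific repository):

```python
import numpy as np

def drop_edges(adj, drop_prob=0.2, seed=0):
    """Randomly drop undirected edges to create an augmented graph view."""
    rng = np.random.default_rng(seed)
    adj = adj.copy()
    rows, cols = np.triu_indices_from(adj, k=1)  # visit each undirected edge once
    for i, j in zip(rows, cols):
        if adj[i, j] and rng.random() < drop_prob:
            adj[i, j] = adj[j, i] = 0  # keep the matrix symmetric
    return adj

# Toy 4-node cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
view1 = drop_edges(A, drop_prob=0.5, seed=1)
view2 = drop_edges(A, drop_prob=0.5, seed=2)
```

The two views would then be encoded by a shared GNN and trained with a contrastive objective; generative and predictive methods in the taxonomy instead reconstruct masked edges/features or predict pseudo-labels.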
## annotated_research_papers
An extensive library of research papers annotated for easier comprehension, aimed primarily at machine learning professionals. The project demystifies complex research through concise annotations and insightful analysis. Featuring a curated collection of significant papers across fields like Computer Vision, NLP, and Diffusion Models, it helps practitioners stay current with industry advancements and enrich their learning journey.
## Awesome-Remote-Sensing-Foundation-Models
This repository collects papers, datasets, benchmarks, code, and pre-trained weights dedicated to Remote Sensing Foundation Models (RSFMs). It systematically categorizes models into types such as vision, vision-language, and generative, covering developments like PANGAEA, TEOChat, and SAR-JEPA, and tracks research published at venues such as ICCV and NeurIPS. The collection serves professionals seeking a deeper understanding of RSFMs, with emphases on geographical knowledge, self-supervised learning, and multimodal fusion.
## vissl
This library supports advanced self-supervised learning in computer vision using PyTorch. It offers reproducible code, comprehensive benchmarks, and a modular design, providing scalable solutions for research. Featuring models such as SwAV, SimCLR, and MoCo (v2), and supporting large-scale training, VISSL helps researchers evaluate and develop new representation-learning methods.
## awesome-contrastive-self-supervised-learning
This collection gathers a wide range of papers on contrastive self-supervised learning, useful for scholars and industry professionals. Regular updates cover topics such as topic modeling, vision-language representation, 3D medical image analysis, and multimodal sentiment analysis. Each entry links to the paper and, where available, its code, providing direct access to cutting-edge methods and experimental setups. Its comprehensive scope makes it an essential reference for anyone tracking recent progress in contrastive learning.
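Many of the papers collected here build on the NT-Xent (normalized temperature-scaled cross-entropy) objective popularized by SimCLR. A minimal NumPy sketch of that loss, where the 2N embeddings pair `z[i]` with `z[i+N]` as positives (the temperature value and names are illustrative):

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss for 2N embeddings where z[i] and z[i+N] form a positive pair."""
    n2 = z.shape[0]
    n = n2 // 2
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = np.concatenate([np.arange(n, n2), np.arange(0, n)])  # partner index
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n2), pos].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))   # 4 positive pairs, 16-dim embeddings
loss = nt_xent_loss(z)
```

Each embedding is classified against all others via a softmax over cosine similarities; aligning the positive pairs (e.g. two augmentations of the same image) drives the loss down, which is the shared mechanism behind most methods in this list.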
## S3Gaussian
Discover S3Gaussian's approach using 3D Gaussians for self-supervised street scene modeling in autonomous driving, bypassing traditional 3D bounding boxes. It features an innovative hexplane-based encoder and a multi-head Gaussian decoder for quality scene rendering. Compatible with Ubuntu, Python, and PyTorch, this open-source initiative offers extensive tools for training and visualization, highlighting its advancements in modeling dynamic environments without extra supervision.
## Awesome-Denoise
Awesome-Denoise explores advanced denoising techniques across diverse noise models such as AWGN, PG, and GAN, targeting RGB, raw, and hybrid color spaces. It emphasizes practical applications in single, burst, and video contexts, and features benchmark datasets like SIDD and RENOIR. This project is an invaluable resource for researchers and developers aiming to enhance image quality via self-supervised learning and ISP applications.
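For context, AWGN (additive white Gaussian noise) is the simplest of the noise models listed, and PSNR is the metric most benchmarks report. Both can be sketched in a few lines (the sigma value and names are illustrative):

```python
import numpy as np

def add_awgn(img, sigma=25.0, seed=0):
    """Corrupt an image with additive white Gaussian noise (AWGN)."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)

def psnr(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB, the standard denoising metric."""
    mse = np.mean((clean - noisy) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

clean = np.full((64, 64), 128.0)      # toy flat gray image
noisy = add_awgn(clean, sigma=25.0)
print(round(psnr(clean, noisy), 1))   # roughly 20 dB for sigma=25
```

PG (Poisson-Gaussian) and GAN-based models mentioned above replace the single Gaussian term with signal-dependent or learned noise, which is what makes raw-domain and real-photograph denoising (e.g. on SIDD) harder than the AWGN setting.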