fashion-clip
FashionCLIP is a CLIP-style image-text model fine-tuned for the fashion domain using contrastive learning. Trained on over 700K fashion image-text pairs, it captures domain-specific concepts that general-purpose models often miss. FashionCLIP 2.0 improves performance further with updated checkpoints, benefiting downstream tasks such as product retrieval and attribute parsing. The model is available on HuggingFace, and because it fine-tunes an existing checkpoint rather than training from scratch, it supports scalable applications with comparatively low environmental impact.
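Since FashionCLIP exposes the standard CLIP interface on HuggingFace, a retrieval-style use case can be sketched with the `transformers` library. The sketch below scores a single image against candidate fashion descriptions (zero-shot classification); the model ID `patrickjohncyh/fashion-clip` is the published hub checkpoint, and the blank placeholder image stands in for a real product photo.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Published FashionCLIP checkpoint on the HuggingFace hub.
MODEL_ID = "patrickjohncyh/fashion-clip"

model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

# Placeholder image; in practice, load a product photo with Image.open(...).
image = Image.new("RGB", (224, 224), color="white")
candidate_labels = [
    "a red evening dress",
    "blue denim jeans",
    "a black leather jacket",
]

# Encode image and texts jointly; padding aligns the text batch.
inputs = processor(
    text=candidate_labels, images=image, return_tensors="pt", padding=True
)
with torch.no_grad():
    outputs = model(**inputs)

# Contrastive image-text similarities, softmaxed into a label distribution.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(candidate_labels, probs):
    print(f"{label}: {p:.3f}")
```

The same embeddings (`model.get_image_features` / `model.get_text_features`) can be precomputed and indexed for large-scale catalog retrieval instead of scoring pairs on the fly.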