Dress Code Dataset: A Virtual Try-On Innovation
The Dress Code Dataset is a pioneering resource designed to enhance virtual try-on experiences. Developed by researchers Davide Morelli, Matteo Fincato, Marcella Cornia, Federico Landi, Fabio Cesari, and Rita Cucchiara, the dataset supports high-resolution, multi-category virtual try-on and contributes to advances in fashion technology.
Overview
The dataset was carefully curated to provide a comprehensive selection of clothing images sourced from the YOOX NET-A-PORTER catalogs. Consisting of more than 50,000 model-garment image pairs, it spans three main categories: dresses, upper-body clothing, and lower-body clothing. Each image has a resolution of 1024 x 768 pixels, providing the detail needed for high-quality virtual try-on applications.
Key Features
- Extensive Collection: The dataset comprises 53,792 garments and 107,584 images, split into three distinct categories: upper body, lower body, and dresses.
- Rich Information: Beyond the model and garment image pairs, the dataset includes additional annotations such as keypoints, skeletons, human label maps, and dense poses to support precise garment fitting on virtual models (a minimal pair-loading sketch follows this list).
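To make the pair structure concrete, here is a minimal loading sketch in Python using Pillow. The root folder, category names, and file names are hypothetical placeholders chosen for illustration; the dataset's own documentation defines the actual layout.

```python
from pathlib import Path
from PIL import Image

# Hypothetical layout: one folder per category, each containing an images/
# directory with model-garment pairs. Adjust to the structure described in
# the dataset's README.
DATASET_ROOT = Path("DressCode")   # assumed root folder
CATEGORY = "upper_body"            # or "lower_body", "dresses"

def load_pair(model_name: str, garment_name: str):
    """Load a (model image, garment image) pair at the native resolution."""
    image_dir = DATASET_ROOT / CATEGORY / "images"
    model_img = Image.open(image_dir / model_name).convert("RGB")
    garment_img = Image.open(image_dir / garment_name).convert("RGB")
    return model_img, garment_img

if __name__ == "__main__":
    # Example pair; the actual file names come from the dataset's pair lists.
    model, garment = load_pair("000001_0.jpg", "000001_1.jpg")
    # PIL reports (width, height); portrait 1024 x 768 images print (768, 1024).
    print(model.size, garment.size)
```

Any resizing or cropping required by a specific try-on model would happen downstream; the pairs themselves are distributed at full resolution.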
Additional Information
- Keypoints: Joint coordinates extracted with OpenPose, detailing 18 keypoints per human body. This data aids in precisely aligning garments with body pose (a loading sketch for these annotations follows this list).
- Skeletons: RGB images that visually map connections between keypoints, providing a structured outline of the pose.
- Human Label Map: A human parser assigns each pixel in an image to one of 18 classes, covering areas such as upper clothes, pants, and accessories like hats and bags, producing a segmentation mask for each target model.
- Human Dense Pose: Dense labels and UV coordinates extracted with DensePose, supporting detailed pose estimation and more realistic garment fitting.
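The sketch below (referenced from the keypoints item above) shows one way the keypoints and the human label map could be read. The folder names, file names, and the JSON field name are assumptions made for illustration; the (x, y, confidence) layout per joint follows the usual OpenPose convention but should be checked against the dataset's documentation.

```python
import json
from pathlib import Path

import numpy as np
from PIL import Image

SAMPLE_DIR = Path("DressCode/upper_body")   # assumed category folder

def load_keypoints(json_path: Path) -> np.ndarray:
    """Return an (18, 3) array of (x, y, confidence) OpenPose joints.

    Assumes the JSON stores a flat list under a 'keypoints' key;
    the real field name may differ.
    """
    with open(json_path) as f:
        data = json.load(f)
    return np.asarray(data["keypoints"], dtype=np.float32).reshape(-1, 3)

def load_label_map(png_path: Path) -> np.ndarray:
    """Return an (H, W) array of integer class indices (18 classes)."""
    return np.asarray(Image.open(png_path), dtype=np.uint8)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    kpts = load_keypoints(SAMPLE_DIR / "keypoints" / "000001_2.json")
    labels = load_label_map(SAMPLE_DIR / "label_maps" / "000001_4.png")
    print(kpts.shape)          # (18, 3)
    print(np.unique(labels))   # subset of the 18 class indices
```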
Experimental Findings
In experiments at a low resolution of 256 x 192, competing models were evaluated using the Structural Similarity Index (SSIM, higher is better), Fréchet Inception Distance (FID, lower is better), and Kernel Inception Distance (KID, lower is better). The model introduced with Dress Code outperformed the compared baselines, reaching an SSIM of 0.906, an FID of 11.40, and a KID of 0.570, demonstrating its capability in virtual try-on scenarios.
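For context on the metrics, the sketch below shows how SSIM and FID are commonly computed with scikit-image and torchmetrics; KID follows the same pattern via torchmetrics' KernelInceptionDistance. This is an illustrative use of standard metric implementations, not the paper's exact evaluation code, and it assumes scikit-image >= 0.19 and torchmetrics installed with its image dependencies (which pull in torch-fidelity).

```python
import numpy as np
import torch
from skimage.metrics import structural_similarity
from torchmetrics.image.fid import FrechetInceptionDistance

def ssim_score(generated: np.ndarray, reference: np.ndarray) -> float:
    """SSIM between two uint8 RGB images of identical shape (H, W, 3)."""
    return structural_similarity(
        generated, reference, channel_axis=-1, data_range=255
    )

def fid_score(generated: torch.Tensor, reference: torch.Tensor) -> float:
    """FID between two batches of uint8 images shaped (N, 3, H, W).

    Reliable FID estimates need hundreds to thousands of images; the small
    feature size here just keeps the toy example fast.
    """
    fid = FrechetInceptionDistance(feature=64)
    fid.update(reference, real=True)
    fid.update(generated, real=False)
    return float(fid.compute())

if __name__ == "__main__":
    # Toy example with random data, only to show the call pattern.
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=(256, 192, 3), dtype=np.uint8)
    b = rng.integers(0, 256, size=(256, 192, 3), dtype=np.uint8)
    print("SSIM:", ssim_score(a, b))

    fake = torch.randint(0, 256, (128, 3, 256, 192), dtype=torch.uint8)
    real = torch.randint(0, 256, (128, 3, 256, 192), dtype=torch.uint8)
    print("FID:", fid_score(fake, real))
```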
Important Considerations
Access to the Dress Code Dataset is governed by strict terms and conditions: it is not available to private companies, requests must be submitted through a formal process using an institutional email address, and a signed release agreement is required. The project team encourages proper citation of their work and provides templates for acknowledgment in academic publications.
Conclusion
The Dress Code Dataset is a powerful tool in the field of virtual fashion, enabling detailed and realistic garment try-ons. By providing high-resolution images and detailed pose data, it paves the way for innovative applications in online fashion retail and virtual fitting rooms. The efforts by the research team offer a valuable contribution to advancing technology in the fashion industry, setting new standards for virtual garment visualization.
For inquiries or further details, interested parties are encouraged to visit the official GitHub page or contact the researchers via email.