# Segmentation

## punica
Discover Punica, a system for serving many LoRA-finetuned variants of a base model with only about 1% additional memory overhead. It relies on a custom CUDA kernel, Segmented Gather Matrix-Vector multiplication (SGMV), to batch requests for different adapters efficiently, and reports up to 12x higher throughput than leading serving systems. Punica is available as prebuilt binaries or can be built from source to match your configuration, with comprehensive examples and benchmarks provided.
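The core idea can be sketched with a short reference loop: requests in a batch are grouped into contiguous segments that share the same adapter, and each segment is multiplied by that adapter's low-rank matrices. The PyTorch snippet below is an illustrative, unfused version of this segmented-gather computation, not Punica's actual CUDA kernel.

```python
import torch

def sgmv_reference(x, lora_A, lora_B, segments):
    """Reference (non-fused) segmented gather matrix-vector LoRA.

    x:        (batch, d_in) hidden states for all requests in the batch
    lora_A:   list of (d_in, r) adapter down-projections, one per adapter
    lora_B:   list of (r, d_out) adapter up-projections, one per adapter
    segments: list of (start, end, adapter_idx) slices of the batch that
              share the same LoRA adapter
    """
    out = x.new_zeros(x.shape[0], lora_B[0].shape[1])
    for start, end, idx in segments:
        # Each contiguous segment of the batch uses its own adapter weights.
        out[start:end] = (x[start:end] @ lora_A[idx]) @ lora_B[idx]
    return out

# Example: a batch of 6 requests served by 3 different LoRA adapters.
d_in, d_out, r = 64, 64, 8
x = torch.randn(6, d_in)
A = [torch.randn(d_in, r) for _ in range(3)]
B = [torch.randn(r, d_out) for _ in range(3)]
delta = sgmv_reference(x, A, B, [(0, 2, 0), (2, 5, 1), (5, 6, 2)])
```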
## toon3d
Toon3D reconstructs 3D scenes from 2D cartoons, whose hand-drawn frames are geometrically inconsistent. Its image-processing pipeline recovers visually coherent structure, opening new paths in animation and virtual reality. The workflow covers environment setup, dataset management, and depth-based data processing, pointing toward further advances in computational art.
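As a rough illustration of the depth-based step, the sketch below lifts a per-pixel depth map into a 3D point cloud with a pinhole camera model; the intrinsics for a drawing are assumed or estimated, since cartoons carry no real calibration. This is generic geometry, not Toon3D's actual code.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map to camera-space 3D points (pinhole model).

    fx, fy, cx, cy are camera intrinsics, assumed or estimated for a drawing.
    Returns an (H*W, 3) point cloud.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 4x4 "frame" at depth 2.
pts = unproject_depth(np.full((4, 4), 2.0), fx=4.0, fy=4.0, cx=2.0, cy=2.0)
```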
## cotta
This repository explores continual test-time domain adaptation, comparing approaches such as AdaBN, BN Adapt, and TENT. It covers CIFAR10-to-CIFAR10C, CIFAR100-to-CIFAR100C, and ImageNet-to-ImageNetC classification as well as Cityscapes-to-ACDC segmentation, aiming to keep classification and segmentation models accurate as the test distribution shifts. It documents the experimental setup, including environment preparation via Conda, with experiment scripts tested on GPUs such as the RTX 2080 Ti and RTX 3090. Additional links provide supplementary materials for a fuller understanding of the implementation and methodology.
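To make the baselines concrete, here is a minimal sketch in the spirit of TENT: only the BatchNorm affine parameters are adapted, by minimizing the entropy of predictions on each incoming test batch. It is an illustration, not the repository's exact scripts.

```python
import torch
import torch.nn as nn

def configure_tent(model):
    """Adapt only BatchNorm affine parameters, TENT-style."""
    model.train()                   # BN uses the current test batch's statistics
    model.requires_grad_(False)     # freeze everything...
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)  # ...except BN scale and shift
            m.track_running_stats = False
            m.running_mean, m.running_var = None, None
            params += [m.weight, m.bias]
    return params

def tent_step(model, x, optimizer):
    """One adaptation step: minimize prediction entropy on a test batch."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch: params = configure_tent(model); opt = torch.optim.SGD(params, lr=1e-3)
# then call tent_step(model, batch, opt) for every incoming test batch.
```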
## LISA
LISA applies a large language model to segmentation, targeting reasoning segmentation driven by complex and implicit queries. It comes with a benchmark of image-instruction pairs that demands extensive world knowledge, produces detailed textual answers, and supports multi-turn dialogue. LISA shows strong zero-shot ability even when trained only on reasoning-free datasets, and fine-tuning on a small amount of reasoning data boosts its performance further. The work was presented at CVPR 2024, and an online demo is available to try.
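The sketch below illustrates what a reasoning-segmentation query looks like in practice; `model.chat` and its return values are hypothetical stand-ins, not LISA's actual API (see the repository's chat and demo scripts for the real interface).

```python
from PIL import Image

def reasoning_segmentation(model, image_path, query):
    """Illustrative only: `model.chat` is a hypothetical stand-in for the
    chat/demo scripts in the LISA repository."""
    image = Image.open(image_path).convert("RGB")
    # The model answers in natural language and emits a segmentation token
    # that is decoded into a binary mask for the region the answer refers to.
    answer, mask = model.chat(image=image, prompt=query)
    return answer, mask

# Example of an implicit query that requires reasoning, not a class name:
# reasoning_segmentation(model, "fridge.jpg",
#                        "Segment the food that would spoil first.")
```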
## visionscript
VisionScript is a high-level, Python-based language for common computer vision tasks such as object detection, classification, and segmentation. Its concise syntax lets tasks be expressed in just a few lines of code. Aimed at newcomers, it offers a REPL and interactive notebooks and integrates models such as CLIP and YOLOv8. Installation is straightforward, and lexical inference smooths the workflow, so developers can get computer vision projects running quickly.
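To show roughly what a one-line VisionScript detection program abstracts over, here is the equivalent plain-Python call to the underlying YOLOv8 model via the ultralytics package; this illustrates the wrapped model, not VisionScript's own syntax or API.

```python
# Plain-Python equivalent of a terse "load an image, detect objects" program,
# using the YOLOv8 model that VisionScript integrates (ultralytics assumed installed).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # small pretrained detection model
results = model("photo.jpg")    # run inference on one image
for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    print(cls_name, box.conf.item())
```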
## neurite
Discover Neurite, a neural network toolbox for medical image analysis using TensorFlow and Keras. It includes network layers, N-D interpolation utilities, and model stacking strategies. Neurite supports segmentation and analysis with models like UNet and provides generators and metrics for performance evaluation. Easily install by cloning or with pip, and contribute under guided coding standards. Neurite's tools are featured in projects like VoxelMorph and brainstorm.
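As a generic illustration of the kind of segmentation architecture such a toolbox targets, here is a tiny two-level UNet written directly in tf.keras; it uses plain Keras layers rather than Neurite's own builders.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(128, 128, 1), n_labels=4):
    """A minimal two-level UNet-style segmentation model (generic Keras)."""
    inp = layers.Input(shape=input_shape)
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    d1 = layers.MaxPooling2D(2)(e1)                       # downsample
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(d1)
    u1 = layers.UpSampling2D(2)(e2)                       # upsample
    c1 = layers.Concatenate()([u1, e1])                   # skip connection
    e3 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)
    out = layers.Conv2D(n_labels, 1, activation="softmax")(e3)
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```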
## DLTA-AI
DLTA-AI facilitates advanced data annotation and object tracking with seamless integration of leading Computer Vision models, including Meta's Segment Anything. It supports comprehensive model selection and provides robust editing tools, allowing for precise video and image annotations. Export options include common formats like COCO and MOT, and custom formats for project-based flexibility. Ideal for applications seeking efficient AI-assisted data labeling and processing.
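For reference, a COCO-style export follows the standard layout sketched below; the field names are the COCO convention rather than DLTA-AI's exact writer, which may add tool-specific metadata.

```python
import json

# Standard COCO detection/segmentation layout that such exports follow.
coco = {
    "images": [
        {"id": 1, "file_name": "frame_0001.jpg", "width": 1920, "height": 1080},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [410.0, 220.0, 180.0, 310.0],   # [x, y, width, height]
            "segmentation": [[410.0, 220.0, 590.0, 220.0,
                              590.0, 530.0, 410.0, 530.0]],  # polygon vertices
            "area": 180.0 * 310.0,
            "iscrowd": 0,
        },
    ],
    "categories": [{"id": 1, "name": "person"}],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```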
## Segment-Everything-Everywhere-All-At-Once
Segment Everything Everywhere All At Once (SEEM) is a versatile, interactive approach to image segmentation built around multi-modal prompts. It accepts diverse prompt types, including visual and textual cues, which can be combined freely; its compositional design handles complex scenarios, and session history is retained to streamline multi-round interaction. Recent updates show it integrated into projects such as LLaVA-Interactive and Set-of-Mark Prompting, underscoring its versatility and potential in image-editing contexts.
## Grounded-SAM-2
Grounded SAM 2 pairs open-set detectors such as Grounding DINO and Florence-2 with SAM 2 to ground, segment, and track objects in video. It offers features such as SAHI support, SAM 2.1 compatibility, and custom video input, aiming to simplify open-set detection and video-tracking workflows.
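One way to obtain the text-grounded boxes that feed SAM 2 is the Hugging Face Transformers port of Grounding DINO, sketched below as an assumption for illustration; the Grounded-SAM-2 repository ships its own scripts and checkpoints, and the resulting boxes would then be passed to SAM 2 as box prompts for mask prediction and tracking across frames.

```python
# Text-grounded detection via the Transformers port of Grounding DINO
# (illustrative; the Grounded-SAM-2 repo provides its own pipeline).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("frame_0000.jpg").convert("RGB")
text = "a person. a dog."  # open-set classes, lowercase, period-separated

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)
boxes = results[0]["boxes"]    # xyxy boxes, usable as SAM 2 box prompts
labels = results[0]["labels"]  # matched text phrase for each box
```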