# Transformer models

## ctransformers
ctransformers provides unified Python bindings for Transformer models implemented in C/C++ with the GGML library. It is compatible with models such as GPT-2 and LLaMA, supports offloading layers to the GPU, and integrates with Hugging Face Hub and LangChain. The package installs from PyPI and also offers experimental features such as GPTQ support and streaming, complemented by extensive documentation.
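
A minimal usage sketch following the package's documented interface; the `marella/gpt-2-ggml` checkpoint name is taken from its README examples, and `gpu_layers` controls how many layers are offloaded to the GPU:

```python
from ctransformers import AutoModelForCausalLM

# Load a GGML checkpoint from the Hugging Face Hub; gpu_layers=0 keeps
# inference entirely on the CPU.
llm = AutoModelForCausalLM.from_pretrained(
    "marella/gpt-2-ggml",
    model_type="gpt2",
    gpu_layers=0,
)

# Stream tokens as they are generated.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
```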

## sage
SAGE provides spelling correction built on Transformer models for multiple languages. It simulates human error patterns through statistical and rule-based spelling-corruption methods, and supports testing and evaluating models on benchmark datasets, making it a versatile resource for improving text accuracy. A paper describing its methodology was accepted at EACL 2024.
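
Since SAGE's correctors are sequence-to-sequence Transformers published on the Hugging Face Hub, one way to drive a checkpoint is through the standard transformers API. A minimal sketch; the `ai-forever/sage-fredt5-large` checkpoint name is an assumption based on the project's releases, and SAGE also ships its own corrector wrappers with a different interface:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint name is an assumption; pick a SAGE corrector for your language.
name = "ai-forever/sage-fredt5-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Feed a deliberately misspelled sentence; the input text should match
# the language the chosen checkpoint was trained on.
inputs = tokenizer("I luve you vary much", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```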

## marlin
Marlin is an FP16xINT4 matrix-multiplication kernel optimized for LLM inference at batch sizes of 16-32 tokens. It outperforms comparable kernels under a range of GPU conditions and integrates easily with CUDA and PyTorch. Key features include asynchronous management of global weights and efficient resource allocation.
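
In PyTorch, such a kernel is exposed as a single mixed-precision matmul call. The sketch below follows the shapes used in the repository's benchmarks, but the function name, weight-packing layout, and workspace size are assumptions to verify against the actual code:

```python
import torch
import marlin  # CUDA extension built from the Marlin repository

# Shapes mirror the repo's benchmarks: a small token batch (m),
# hidden size k, output size n.
m, k, n = 16, 4096, 4096
A = torch.randn(m, k, dtype=torch.half, device="cuda")  # FP16 activations
C = torch.empty(m, n, dtype=torch.half, device="cuda")  # FP16 output

# k*n INT4 weights packed into int32 words, plus per-group FP16 scales
# (group size 128); the packing layout here is an assumption.
B = torch.randint(0, 2**31 - 1, (k // 16, n * 2),
                  dtype=torch.int32, device="cuda")
s = torch.randn(k // 128, n, dtype=torch.half, device="cuda")

# Scratch buffer for inter-block synchronization (size is an assumption).
workspace = torch.zeros(n // 128 * 16, dtype=torch.int32, device="cuda")

marlin.mul(A, B, C, s, workspace)  # C = A @ dequantize(B, s)
```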

## scenic
Scenic is a framework for building attention-based computer vision models, supporting tasks such as classification and segmentation across multiple modalities. Built on JAX and Flax, it simplifies large-scale training with efficient data pipelines and established baselines, making it well suited to research. Its projects include state-of-the-art models such as ViViT, and it offers adaptable solutions for both newcomers and experts, integrating easily into existing workflows.
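
Scenic models are written as Flax modules trained with JAX. The toy classifier below illustrates the attention-based pattern in that same stack; it is an illustrative sketch, not code from Scenic itself:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MiniViT(nn.Module):
    """Toy attention-based classifier in the JAX/Flax stack Scenic builds on."""
    num_classes: int = 10
    hidden: int = 64

    @nn.compact
    def __call__(self, images):  # images: (batch, height, width, channels)
        # Patchify with a strided convolution, as ViT-style models do.
        x = nn.Conv(features=self.hidden, kernel_size=(4, 4), strides=(4, 4))(images)
        b, h, w, c = x.shape
        x = x.reshape(b, h * w, c)            # (batch, tokens, hidden)
        x = nn.SelfAttention(num_heads=4)(x)  # attention over patch tokens
        x = x.mean(axis=1)                    # mean-pool the tokens
        return nn.Dense(self.num_classes)(x)  # class logits

model = MiniViT()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 32, 32, 3)))
logits = model.apply(params, jnp.ones((2, 32, 32, 3)))
print(logits.shape)  # (2, 10)
```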