#transformers

spaCy
Explore spaCy's robust NLP platform supporting over 70 languages with state-of-the-art neural networks. Access pretrained pipelines for essential tasks like tokenization, named entity recognition, and text classification. Leverage multi-task learning with pretrained transformers such as BERT, with straightforward deployment and production readiness. Enhance projects with custom models in frameworks like PyTorch or TensorFlow, and use the built-in visualizers for syntax and NER. This open-source software, released under the MIT license, offers high accuracy and extensibility for all your NLP needs.
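A minimal usage sketch, assuming the small English pipeline `en_core_web_sm` has been installed with `python -m spacy download en_core_web_sm`:

```python
import spacy

# Load a pretrained pipeline and run it over a sentence.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Token-level annotations: part-of-speech tags and dependency labels.
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities recognized by the pipeline.
for ent in doc.ents:
    print(ent.text, ent.label_)
```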
LongLM
Explore Self-Extend for efficient LLM context window extension without additional tuning. The method exploits LLMs' inherent ability to handle long sequences through bi-level (neighbor and grouped) attention. Recent updates include Llama-3 support and an ICML 2024 presentation. Suitable for researchers and developers targeting long-sequence model efficiency.
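A rough, illustrative sketch of the bi-level position mapping (the repository itself patches the model's attention layers; the window and group sizes below are hypothetical):

```python
def effective_rel_pos(i: int, j: int, neighbor_window: int = 512, group_size: int = 8) -> int:
    """Relative position used for the attention score between query i and key j.

    Nearby tokens keep exact distances (neighbor attention); distant tokens
    fall back to floor-divided, grouped positions (grouped attention), so the
    relative distance never exceeds what the model saw during pretraining.
    """
    dist = i - j
    if dist < neighbor_window:
        return dist  # neighbor attention: exact relative position
    # Grouped attention: pooled positions, shifted to continue past the window.
    return (i // group_size) - (j // group_size) + neighbor_window - neighbor_window // group_size
```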
transformers
Participate in this free and open-source course exploring transformer architecture, featuring hands-on exercises, paper reviews, and Jupyter notebooks. Ideal for those interested in encoder-decoder models, self-attention mechanisms, and practical implementations like BERT and GPT-2. Engage collaboratively via GitHub and anticipate upcoming educational videos.
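For orientation, a compact sketch of single-head scaled dot-product self-attention, the mechanism the course builds on (illustrative NumPy code, not taken from the course notebooks):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)          # softmax over keys
    return weights @ V                                  # weighted sum of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```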
BERTweet
BERTweet is the first public large-scale language model pre-trained for English Tweets, following the RoBERTa pre-training procedure. It is trained on a corpus of 850 million Tweets, including 5 million related to COVID-19, to improve performance on Tweet NLP tasks. The model can be used with frameworks like `transformers` and `fairseq`, with pre-trained checkpoints such as `bertweet-base` and `bertweet-large` suitable for deep learning applications. A built-in Tweet normalization step handles user mentions and URLs, supporting both research and practical use.
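A minimal sketch of loading BERTweet through `transformers` (the checkpoint name and `normalization` flag follow the model card):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# normalization=True applies BERTweet's Tweet normalization in the tokenizer
# (user mentions and URLs are mapped to special tokens).
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
model = AutoModel.from_pretrained("vinai/bertweet-base")

line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://t.co/abc"
inputs = tokenizer(line, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # contextual embeddings per token
```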
chat_templates
This repository contains a variety of chat templates designed for instruction-tuned large language models (LLMs), for use with Hugging Face's Transformers library. It includes templates for recent models like Meta's Llama-3.1 and Google's Gemma-2. These templates can be integrated into applications for enhanced interaction and response generation. Detailed examples and configurations make this resource useful for developers focusing on conversational AI. Contributions of additional templates are encouraged.
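A hedged sketch of attaching one of the repository's Jinja templates to a tokenizer and rendering a conversation (the file path and model id are examples; the exact template names live in the repo):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Override the tokenizer's built-in template with one from the repository,
# stripping the pretty-printed indentation and newlines.
with open("chat_templates/llama-3-instruct.jinja") as f:
    tokenizer.chat_template = f.read().replace("    ", "").replace("\n", "")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```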
AutoGPTQ
Discover this advanced weight-only quantization package based on the GPTQ algorithm, featuring user-friendly APIs to make LLM inference more efficient. Recent updates include integration with 🤗 Transformers and faster processing through the Marlin int4*fp16 kernel. AutoGPTQ supports quantization and inference for a wide range of models, with benchmarks covering inference speed and model perplexity, and runs on Linux and Windows with NVIDIA (CUDA) and AMD (ROCm) GPUs. Ideal for developers aiming to optimize AI model deployment and manage computational costs while preserving model accuracy.
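A minimal quantization sketch following AutoGPTQ's basic usage; the model name and the single calibration example are placeholders (real quantization needs a larger calibration set):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained = "facebook/opt-125m"  # example model
tokenizer = AutoTokenizer.from_pretrained(pretrained)
examples = [tokenizer("AutoGPTQ is an easy-to-use LLM quantization package.", return_tensors="pt")]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(pretrained, quantize_config)

model.quantize(examples)                                  # run GPTQ on the calibration data
model.save_quantized("opt-125m-4bit", use_safetensors=True)
```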
spacy-transformers
This package integrates Hugging Face transformers like BERT, GPT-2, and XLNet into spaCy, providing a seamless blend into NLP workflows. Designed for spaCy v3, it features multi-task learning, automated token alignment, and customization options for transformer outputs. Installation is user-friendly via pip, compatible with both CPU and GPU. Though direct task-specific heads are unsupported, prediction outputs for text classification are accessible through wrappers.
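A short sketch using a transformer-backed pipeline (assumes `en_core_web_trf` has been downloaded with `python -m spacy download en_core_web_trf`):

```python
import spacy

# en_core_web_trf is spaCy's English pipeline built on spacy-transformers.
nlp = spacy.load("en_core_web_trf")
doc = nlp("spacy-transformers aligns transformer wordpieces with spaCy tokens.")

print([(ent.text, ent.label_) for ent in doc.ents])

# The raw transformer output (wordpieces, tensors, alignment) is exposed on the
# doc for downstream components; its exact layout depends on the package version.
print(type(doc._.trf_data))
```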
HALOs
A modular codebase implementing Human-Aware Loss Functions (HALOs) for aligning large language models such as Llama with offline human feedback, used to train the Archangel model suite. Highlights include modular data loading, specialized trainer subclasses, and sophisticated evaluation techniques, offering scalable solutions for advanced AI alignment.
awesome-huggingface
This curated list presents open-source projects and applications integrating Hugging Face libraries to improve NLP capabilities, offering tools, tutorials, and resources from various contributors. Categories include NLP toolkits, text representation, inference engines, model scalability, and others, serving as useful resources for developers and researchers engaging with Hugging Face’s ecosystem for AI development.
zero_nlp
The project delivers a versatile framework for Chinese NLP tasks built on PyTorch and Transformers. It includes end-to-end training and fine-tuning recipes for a range of models, from text-to-vector encoders to multimodal models. Abundant open-source training data and processing methods suited to large datasets make setup straightforward. Supported models include GPT2, CLIP, and GPT-NeoX among others, with multi-GPU training and deployment capabilities. Discover tutorials for model modification and explore a wide range of pretrained and custom models for diverse NLP needs.
attention_sinks
Discover how attention_sinks enhances large language models to sustain fluent text generation with consistent VRAM usage. This method excels in applications requiring endless text generation without model retraining.
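A hedged sketch of the drop-in usage: model classes are imported from `attention_sinks` instead of `transformers`, and the cache keeps a few initial "sink" tokens plus a sliding window (the model id and window sizes are illustrative):

```python
from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM  # drop-in replacement

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    attention_sink_size=4,            # always keep the first 4 tokens in the KV cache
    attention_sink_window_size=1020,  # plus a sliding window of recent tokens
)

inputs = tokenizer("The endless story begins:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```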
chatglm_finetuning
This project enhances ChatGLM models by offering diverse tuning options, with integrations for PyTorch Lightning, ColossalAI, and Transformers trainers. It includes guidance for LoRA and other fine-tuning methods, installation instructions, data scripts, and continual updates for improved model application.
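The repository drives fine-tuning through its own configuration files and training scripts; for orientation, this is roughly what an equivalent LoRA setup looks like with the generic `peft` library (model id and hyperparameters are illustrative, not the repo's defaults):

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# ChatGLM checkpoints ship custom modeling code, hence trust_remote_code=True.
base = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM fuses Q/K/V into one projection
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()       # only the LoRA adapters are trainable
```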
XPhoneBERT
XPhoneBERT, a multilingual phoneme model, optimizes text-to-speech (TTS) technology by refining phoneme representations. With its BERT-base architecture trained on 330 million phoneme-level sentences from about 100 languages, it enhances TTS systems' naturalness and prosody, even with limited training data. Seamlessly integrating with Python's 'transformers' package and 'text2phonemesequence' for phoneme conversion, XPhoneBERT supports efficient multilingual pre-training.
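A sketch following the project's model card: text is first converted to a phoneme sequence with `text2phonemesequence`, then encoded by XPhoneBERT (class and method names below follow that documentation and are assumptions, not verified here):

```python
from transformers import AutoModel, AutoTokenizer
from text2phonemesequence import Text2PhonemeSequence

tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")
xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")

# Convert word-segmented text into a phoneme sequence for the target language.
text2phone = Text2PhonemeSequence(language="eng-us", is_cuda=False)
phonemes = text2phone.infer_sentence("That is a good idea .")

inputs = tokenizer(phonemes, return_tensors="pt")
features = xphonebert(**inputs).last_hidden_state  # phoneme-level representations
```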
DeepPavlov
DeepPavlov 1.0 is an intuitive open-source NLP framework using PyTorch and transformers, designed to provide seamless model deployment for practitioners with little NLP background. It supports diverse applications with pre-trained models, simple installation, and various interfaces across platforms.
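A minimal sketch of building and running one of DeepPavlov's pretrained configs (the config name is one of the published NER configs; dependencies can be installed first with `python -m deeppavlov install ner_ontonotes_bert`):

```python
from deeppavlov import build_model

# download=True fetches the pretrained model files on first use.
ner = build_model("ner_ontonotes_bert", download=True)

tokens, tags = ner(["DeepPavlov is an open-source NLP framework developed in Moscow."])
print(list(zip(tokens[0], tags[0])))  # (token, BIO tag) pairs for the first sentence
```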
keytotext
Explore the functionality of automated sentence generation through keywords to improve marketing, SEO, and topic development. This project utilizes the T5 model and provides extensive resources including tutorials, API access, and a user-friendly interface made with Streamlit. Enhance content strategies efficiently with cutting-edge natural language processing solutions.
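A short sketch following the project README (`k2t-base` is one of the published checkpoints):

```python
from keytotext import pipeline

nlp = pipeline("k2t-base")                       # T5-based keyword-to-text model
print(nlp(["India", "cricket", "world cup"]))    # -> one sentence using the keywords
```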
BERTopic
Explore BERTopic for topic modeling using transformers and c-TF-IDF. It supports supervised and semi-supervised models, offering hierarchical and zero-shot topic solutions. With modular workflows and multi-language support, BERTopic enhances text analysis across diverse datasets, providing rich visualizations and seamless integration.
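A minimal sketch of the standard fit-and-inspect workflow on a public dataset:

```python
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))["data"]

topic_model = BERTopic(language="english")
topics, probs = topic_model.fit_transform(docs)   # embed, cluster, and extract topics

print(topic_model.get_topic_info().head())        # overview of the discovered topics
```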
huggingface-llama-recipes
Learn to efficiently use Llama 3.1 and 3.2 models, covering everything from installation to customization with datasets. This repository provides insights into local inference, model fine-tuning, and performance optimization, including advanced methods like synthetic data generation and chatbot creation. Benefit from techniques like assisted decoding for enhanced text generation speed. Discover API integration for large models, protection strategies using Llama Guard and Prompt Guard, and RAG pipelines for effective deployment. Designed for both newcomers and experienced machine learning enthusiasts.
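As one example of the techniques covered, a hedged sketch of assisted decoding with `transformers`, where a small Llama 3.2 model drafts tokens that the larger 3.1 model verifies (model ids are examples, and both checkpoints are gated):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-3.1-8B-Instruct"
draft_id = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Write a short poem about the sea:", return_tensors="pt").to(target.device)
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```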