# Language Models

## ReAct
Understand how ReAct can enhance reasoning and acting capabilities in AI language models, as detailed in the ICLR 2023 paper. Learn implementation methods with OpenAI's GPT-3 via LangChain's zero-shot ReAct agent. Review experimental results on datasets such as HotpotQA and FEVER, highlighting the performance differences between PaLM and GPT-3. Setup requires API key configuration and installation of the 'openai' and 'alfworld' packages. The reported results give concrete reference points for evaluating reasoning-plus-acting agents.
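The Thought/Action/Observation loop at the heart of ReAct can be sketched without any API access. The `stub_llm` and `lookup` functions below are hypothetical stand-ins for GPT-3 and a real tool, assumed only so the control flow is visible:

```python
# Minimal ReAct-style loop. A scripted stub stands in for the LLM, and a
# tiny hard-coded lookup table stands in for a real retrieval tool.

def lookup(query):
    facts = {"capital of France": "Paris"}  # hypothetical knowledge base
    return facts.get(query, "unknown")

def stub_llm(transcript):
    # Imitates ReAct's Thought/Action/Final Answer output format.
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: lookup[capital of France]"
    return "Thought: I now know the answer.\nFinal Answer: Paris"

def react(question, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        output = stub_llm(transcript)
        transcript += "\n" + output
        if "Final Answer:" in output:
            return output.split("Final Answer:")[1].strip()
        if "Action: lookup[" in output:
            arg = output.split("Action: lookup[")[1].split("]")[0]
            transcript += f"\nObservation: {lookup(arg)}"  # feed result back
    return None

answer = react("What is the capital of France?")
```

In the real agent, `stub_llm` is a GPT-3 completion call and the observation is appended to the prompt before the next step, exactly as above.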
## Prompt-Engineering-Guide
Explore the evolving field of prompt engineering, which optimizes prompts for language models. This guide provides an extensive range of resources like papers, lectures, and tools to expand your understanding of the capabilities and limitations of large language models in tasks such as question answering and reasoning. Discover effective prompting techniques and self-paced courses at DAIR.AI Academy. The guide is available in multiple languages, serves a community of over 3 million learners, and can be read online or run locally.
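One of the techniques the guide covers, few-shot prompting, is simple to sketch: labeled examples are concatenated ahead of the new query so the model can infer the task from the pattern. The field names and task below are illustrative:

```python
# Assemble a few-shot classification prompt from labeled examples.

def few_shot_prompt(examples, query, instruction):
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    # The final entry leaves the label blank for the model to complete.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    examples=[("I loved it", "positive"), ("Terrible service", "negative")],
    query="The food was great",
    instruction="Classify the sentiment of each text.",
)
```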
## LLM-Pruner
Explore LLM-Pruner, an efficient tool for structurally pruning large language models with minimal data. Supports models like Llama, Vicuna, and BLOOM, focusing on preserving multi-task ability and enhancing performance, now including GQA and Llama3 series.
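Structural pruning means removing whole channels or heads rather than individual weights. LLM-Pruner scores coupled structures with gradient-based importance; the magnitude criterion below is a simplified stand-in that shows only the shape of the operation:

```python
import numpy as np

# Illustrative structured pruning: drop whole output channels (rows) of a
# weight matrix, keeping the rows with the highest L2-norm importance.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # 8 output channels, 16 inputs

def prune_rows(W, keep_ratio):
    importance = np.linalg.norm(W, axis=1)       # one score per channel
    k = max(1, int(W.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])  # top-k rows, original order
    return W[keep], keep

W_pruned, kept = prune_rows(W, keep_ratio=0.5)
```

Because entire rows disappear, the pruned layer stays a dense matrix and needs no sparse kernels, which is what makes structured pruning deployment-friendly.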
## attention_sinks
Discover how attention_sinks enhances large language models to sustain fluent text generation with consistent VRAM usage. This method excels in applications requiring endless text generation without model retraining.
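The cache policy behind this is small enough to sketch: always retain the first few "sink" tokens plus a sliding window of recent tokens, so the KV cache (and hence VRAM) stays constant during unbounded generation. The parameter values here are illustrative, not the library's defaults:

```python
# Attention-sink cache eviction: keep the initial sink tokens and a
# recency window; everything in between is dropped.

def evict(cache, num_sinks=4, window=8):
    if len(cache) <= num_sinks + window:
        return cache
    return cache[:num_sinks] + cache[-window:]

cache = []
for token_id in range(100):  # pretend we generate 100 tokens
    cache.append(token_id)
    cache = evict(cache)
```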
## pretraining-with-human-feedback
Examine how human preferences are incorporated into language model pretraining via Hugging Face Transformers. This method uses annotations and feedback to align models with human values, enhancing their ability to reduce toxicity and meet compliance standards. Learn about methods, configurations, and available pretrained models for tasks including toxicity management, PII detection, and PEP8 adherence, documented using wandb. Leverage this codebase to refine models for better processing of language aligned with human expectations.
## prometheus-eval
Prometheus-Eval is an open-source project offering sophisticated tools for the evaluation of language models, including the BiGGen-Bench with 77 tasks and 765 instances. It features Prometheus 2 models, such as the efficient 8x7B version, supporting both absolute and relative grading, aligned closely with human review standards. The project facilitates both local and API-based inference to ensure flexible assessment processes, providing robust and expandable tools for contemporary AI evaluation needs.
## Binder
Binder integrates language models into symbolic languages, achieving high performance with minimal annotations. The project, regularly updated for new model capabilities such as OpenAI’s GPT series, ensures state-of-the-art results in natural language processing. Recognized at ICLR 2023, it offers resources like code, a demo page, and a project site for user engagement. Binder is designed for easy adaptation and implementation, serving both beginners and experienced users in the machine learning community.
## gpt-2
Gain understanding of the archived GPT-2 project from OpenAI, known for introducing unsupervised multitask language models. This resource offers code, models, and a dataset useful for researchers and engineers studying model behavior. Learn about key concerns like model robustness and biases, and why careful use is critical in safety-sensitive applications. Opportunities exist for contributions in studying bias reduction and synthetic text detection, building on the foundational work for future progress.
## nlp-journey
Discover a wide array of deep learning and natural language processing resources, including key books, notable research papers, informative articles, and crucial GitHub repositories. Topics include transformer models, pre-training, text classification, and large language models. Ideal for developers, researchers, and enthusiasts to expand their knowledge of NLP developments.
## LLM-Adapters
Discover a framework designed for efficient fine-tuning of advanced large language models via a variety of adapter techniques. This framework, supporting models such as LLaMA, OPT, BLOOM, and GPT-J, enables parameter-efficient learning across multiple tasks. It offers compatibility with adapter methods like LoRA, AdapterH, and Parallel adapters, enhancing the performance of NLP applications. Keep informed about the latest achievements, including outperforming baselines such as ChatGPT on commonsense and mathematics evaluations.
## autollm
AutoLLM provides a unified API for deploying applications with over 100 language models, allowing for straightforward cost calculations and installations. It supports over 20 vector databases and simplifies FastAPI app creation with single-line code implementation. The platform includes productivity tools such as a 1-line RAG LLM engine and automated cost calculations, supporting efficient project management. Resources including video tutorials, blog posts, and comprehensive documentation ease the transition from platforms like LlamaIndex, making it a top choice for developers looking for efficient and scalable solutions.
## inference
The Xorbits Inference library streamlines deployment and management for advanced language, speech recognition, and multimodal models. It offers seamless hardware integration and flexible resource utilization, supporting state-of-the-art open-source models. Designed for developers and data scientists, Xorbits Inference facilitates deployment with a single command and supports distributed deployment alongside various interfaces like RESTful API, CLI, and WebUI. Stay updated with recent framework enhancements and models through extensive documentation and community support.
## awesome-huge-models
Explore the growing field of large AI models after GPT-4, where collaborations on GitHub are overtaking traditional research. This page focuses on open-source developments in Large Language Models (LLMs) and includes accessible training and inference codes and datasets. Keep informed about LLM advancements in language, vision, speech, and science, and examine surveys and major model releases to gain insights into modern AI model architecture and licenses.
## LangChain-for-LLM-Application-Development
The course provides a comprehensive guide for developers interested in utilizing the LangChain framework to enhance language model applications. Topics covered include interacting with LLMs, crafting prompts, utilizing memory capabilities, and constructing operational chains to improve reasoning and data processing. Taught by LangChain creator Harrison Chase and Andrew Ng, the roughly one-hour course shows how to build dynamic applications that make effective use of proprietary data.
## mPLUG-Owl
Examine the progressive developments in multi-modal large language models achieved by mPLUG-Owl. This family utilizes modular architecture to boost multimodality. Stay informed on the advancements of mPLUG-Owl3, which emphasizes long image-sequence comprehension, and note mPLUG-Owl2's CVPR 2024 accolade. Gain insight into the enriched features of the Chinese version, mPLUG-Owl2.1, which collectively contribute to advancing AI linguistic capabilities.
## fromage
Explore FROMAGe, a versatile framework that connects language models to images, enhancing multimodal input and output capabilities. It offers pretrained model weights and extensive documentation for seamless image retrieval and contextual understanding. The repository includes essential code for replicating image-text alignment tasks using Conceptual Captions datasets. FROMAGe excels in image generation and retrieval, supported by thorough evaluation scripts. Built for flexibility, it supports multiple visual model settings and reduces disk usage via model weight pruning. Try the interactive Gradio demo for practical insights.
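At retrieval time, image-text retrieval of this kind reduces to nearest-neighbor search in a shared embedding space. The random embeddings below are stand-ins for the model's actual image and text encoders, assumed only to show the search step:

```python
import numpy as np

# Toy embedding-space retrieval: find the image whose (unit-normalized)
# embedding has the highest cosine similarity with a text embedding.

rng = np.random.default_rng(1)
image_embs = rng.normal(size=(10, 16))
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)

# Construct a text embedding that is deliberately close to image 3.
text_emb = image_embs[3] + 0.01 * rng.normal(size=16)
text_emb /= np.linalg.norm(text_emb)

best = int(np.argmax(image_embs @ text_emb))  # cosine similarity argmax
```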
## toolformer-pytorch
Discover Toolformer in PyTorch, a MetaAI initiative that integrates API tool utilization for language models. This project showcases transformer training with API calls to improve output efficiency, featuring examples with PaLM and a unique fitness scoring method. Designed for AI researchers advancing language model capabilities through tool integration.
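The core inference-time idea is that the model emits inline API calls in its text, which are executed and spliced back in before generation continues. The bracketed `[Calculator(...)]` syntax below mirrors the paper's examples; the safe arithmetic evaluator is an illustrative helper, not part of the repository:

```python
import ast
import operator
import re

# Execute Toolformer-style inline Calculator calls embedded in text,
# using a restricted AST walker instead of eval() for safety.

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def execute_calls(text):
    # Replace each [Calculator(expr)] with the evaluated result.
    return re.sub(r"\[Calculator\(([^)]*)\)\]",
                  lambda m: str(safe_eval(m.group(1))), text)

out = execute_calls("The total is [Calculator(400 / 1400)] of the budget.")
```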
## hh-rlhf
This repository provides crucial datasets for AI safety research, including human preference data on the helpfulness and harmlessness of language models and red teaming data designed to mitigate harmful effects. The insights from the data utilize human feedback to inform safer model training methodologies, featuring detailed JSONL files with paired preference texts and comprehensive records of adversarial interactions. The datasets serve researchers focused on model behavior and AI ethics, addressing concerns like discriminatory language and self-harm. Engage with datasets drawn from key studies to propel advanced research into AI safety and performance.
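The preference files are JSONL records with `chosen` and `rejected` conversation texts. A minimal loader (the in-memory sample below stands in for a real file handle) looks like this:

```python
import io
import json

# Each JSONL line holds one preference pair: a "chosen" and a "rejected"
# conversation. A StringIO sample stands in for an open dataset file.

sample = io.StringIO(
    '{"chosen": "Human: hi\\n\\nAssistant: Hello!", '
    '"rejected": "Human: hi\\n\\nAssistant: go away"}\n'
)

pairs = [(record["chosen"], record["rejected"])
         for record in (json.loads(line) for line in sample)]
```

Pairs in this shape feed directly into preference-based training objectives such as reward modeling.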
## Awesome-Tool-Learning
Discover a comprehensive collection of tool learning and AI resources, featuring surveys, fine-tuning methods, and in-context learning improvements. Uncover practical applications like Auto-GPT and LangChain. Regular updates make it a valuable resource for developers and researchers focused on advancing AI tool integration.
## chatbox
Discover the open-source Chatbox Community Edition, an AI chat client compatible with ChatGPT, Claude, and more across Windows, Mac, and Linux. Licensed under GPLv3, it ensures local data storage and easy installation. Key features include DALL-E 3 image generation, advanced prompting, and Markdown. Benefit from its intuitive UI, team collaboration, and availability on iOS, Android, and web.
## languagemodels
The Python package allows efficient use of large language models on systems with only 512MB RAM, facilitating tasks such as instruction following and semantic search with data privacy. It enhances performance through GPU acceleration and int8 quantization. Ideal for developing chatbots, accessing real-time information, and educational purposes, the package is easy to install and suited for both learners and professionals, supporting educational and potential commercial use cases.
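The int8 quantization that makes the small memory footprint possible maps floats to 8-bit integers with a shared scale. A symmetric per-tensor sketch (a simplification of what real inference backends do):

```python
# Symmetric int8 quantization: scale floats so the largest magnitude maps
# to 127, round to integers, and dequantize by multiplying back.

def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.5, -1.0, 0.25, 0.99]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight is stored in one byte instead of four, at the cost of a rounding error bounded by half the scale.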
## Awesome-AGI
Explore a curated list of AGI frameworks, software, and resources focused on learning, reasoning, and problem-solving in multiple domains. This collection is invaluable for those exploring Artificial General Intelligence and its industry-transforming potential in areas such as healthcare, finance, and education, while also considering ethical and societal challenges. Discover tools including Auto-GPT, AgentGPT, MetaGPT, among others, aimed at enhancing AI autonomy and capabilities, complemented by informative papers and guides to expand AGI expertise.
## inseq
Explore a versatile tool designed to facilitate interpretability analysis for sequence generation models using PyTorch. This guide provides insights into installation, application, and the features offered, catering to Python enthusiasts. Understand feature attribution for multiple models through methods such as Integrated Gradients and Attention Weight Attribution. Visualize results seamlessly in Jupyter notebooks or the console, using custom scores to gain comprehensive insights. Inseq simplifies post-hoc model analysis, fostering enhanced understanding and innovation in sequence generation.
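Integrated Gradients, one of the attribution methods mentioned, accumulates gradients along a straight path from a baseline to the input. Inseq wraps this for transformer models; the toy differentiable function and finite-difference gradients below are assumptions made purely to show the mechanics:

```python
# Integrated Gradients on a toy function f, baseline at zero.
# Attribution_i = x_i * integral over alpha of df/dx_i(alpha * x).

def f(x):
    return x[0] ** 2 + 3 * x[1]

def grad(x, eps=1e-5):
    # Central finite differences stand in for autograd.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def integrated_gradients(x, steps=200):
    attr = [0.0] * len(x)
    for s in range(1, steps + 1):
        alpha = s / steps
        g = grad([alpha * xi for xi in x])
        for i in range(len(x)):
            attr[i] += x[i] * g[i] / steps  # Riemann sum of the path integral
    return attr

x = [2.0, 1.0]
attr = integrated_gradients(x)
```

A useful sanity check is the completeness property: the attributions sum (approximately) to f(x) minus f(baseline).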
## Prompt-Engineering-Guide-zh-CN
This guide offers an in-depth look at prompt engineering, essential for the enhancement and effective application of large language models (LLMs). It features the newest research findings, educational resources, and tools to improve LLM performance in areas such as question answering and complex reasoning. Continuously updated with fresh content and collaborations, this resource serves both researchers and developers interested in mastering advanced prompt engineering.
## evals
OpenAI's evals provide a comprehensive setup for testing large language models, featuring pre-built and customizable evals for various use cases. Develop private evaluations aligned with recurring LLM patterns using personal data. This setup includes guides for OpenAI API key integration and eval execution instructions. Log results to Snowflake databases and adjust evaluation logic via GitHub. While custom code is restricted, model-graded evals can be submitted and reviewed for future improvements.
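The simplest eval pattern, exact match against an ideal answer, can be sketched without the framework or an API key. The stub completion function below is a hypothetical stand-in for a model call:

```python
# Minimal "match"-style eval: run a completion function over samples and
# score exact matches against the ideal answers.

samples = [
    {"input": "2+2=", "ideal": "4"},
    {"input": "capital of Japan?", "ideal": "Tokyo"},
]

def stub_completion(prompt):
    # Hypothetical model: right on arithmetic, wrong on geography.
    return {"2+2=": "4", "capital of Japan?": "Kyoto"}.get(prompt, "")

def run_eval(samples, complete):
    results = [complete(s["input"]) == s["ideal"] for s in samples]
    return sum(results) / len(results)

accuracy = run_eval(samples, stub_completion)
```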
## PythonProgrammingPuzzles
Explore a diverse range of Python puzzles aimed at evaluating AI programming skills across different difficulty levels and domains. Learn from code solutions created by OpenAI's Codex and assess AI's performance. Engage with the community by contributing or discovering new puzzles, and uncover the AI potential in solving complex tasks using an unambiguous code-based approach that aids in precise training and evaluation.
## build-nanogpt
The build-nanoGPT project presents a clear walkthrough for constructing language models, specifically GPT-2 from scratch. It provides a step-by-step guide with video support, helping users grasp the entire process efficiently in about an hour with affordable resources. Note: Finetuning for interactive abilities isn't covered; community discussions are encouraged on GitHub and Discord.
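The centerpiece of building GPT-2 from scratch is masked (causal) self-attention: each position may attend only to itself and earlier positions. A single-head sketch with random toy weights (the walkthrough itself uses PyTorch; NumPy is used here to keep the example dependency-light):

```python
import numpy as np

# Single-head causal self-attention on a toy sequence.
rng = np.random.default_rng(0)
T, d = 5, 8                            # sequence length, head dimension
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

scores = Q @ K.T / np.sqrt(d)          # scaled dot-product scores
mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # future positions
scores[mask] = -np.inf                 # forbid attending to the future

# Numerically stable softmax over each row.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V
```

The `-inf` entries become exact zeros after the softmax, which is how the model is prevented from "seeing" tokens it has not yet generated.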
## dspy
Explore a framework designed to systematically optimize language model prompts and weights, significantly improving the efficiency of multi-stage language models. The system separates program flow from parameters, utilizing LM-driven algorithms to adjust prompts and weights according to specific metrics. It is compatible with models such as GPT-3.5 and GPT-4, offering consistently reliable and high-quality results and minimizing manual prompting. Ideal for those looking for a structured, iterative approach to building complex language model systems.
## ollama
Ollama provides an easy-to-use framework for deploying and customizing large language models across macOS, Windows, and Linux. With a library featuring models like Llama and Phi, Ollama supports local execution and Docker deployment. Import models from GGUF, PyTorch, or Safetensors, and enjoy simple integration with prompt adjustments and model file configurations. The project also offers community-driven integrations and comprehensive documentation for APIs and command-line usage, making it perfect for developers who want to seamlessly harness the power of language models.
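Ollama's generate endpoint streams newline-delimited JSON objects, each carrying a `response` fragment and a `done` flag. Assembling the full reply from such a stream looks like this (a canned transcript stands in for the HTTP response, so no server is needed):

```python
import json

# Collect a streamed reply from newline-delimited JSON chunks of the
# shape {"response": "...", "done": false}.

stream = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]

def collect(lines):
    text = ""
    for line in lines:
        chunk = json.loads(line)
        text += chunk["response"]
        if chunk["done"]:
            break
    return text

reply = collect(stream)
```

In a live setup the `stream` iterable would come from an HTTP request to the local Ollama server, read line by line.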
## llms_paper
This repository offers a comprehensive review of significant conference papers in LLMs, focusing on multimodal models, PEFT, low-shot QA, RAG, LLM interpretability, agents, and CoT. It provides valuable resources such as 'LLMs Nine-Story Demon Tower' and 'LLMs Interview Notes'. Detailed insights into numerous series, including Gemini and GPT-4V evaluations, help AI algorithm engineers build essential background. Explore methods and advances in LLMs, NLP, and recommendation systems across sectors like healthcare and law, highlighting practical applications.
## Awesome-Code-LLM
This survey examines the intersection of NLP and software engineering via language models for code. It presents a chronological categorization of research papers, providing insights into basic language models, their adaptations for code, and pretraining methods. Key topics covered include reinforcement learning on code, analysis of AI-generated code, low-resource languages, and practical tasks such as code translation and program repair. Additionally, the survey includes recommended readings for those new to NLP, and updates on notable papers, serving as a valuable resource for understanding developments and uses of large language models in code-related fields.
## Awesome-Prompt-Engineering
This repository provides a comprehensive collection of Prompt Engineering resources covering Generative Pre-trained Transformers like GPT, ChatGPT, and PaLM. The collection includes papers, tools, datasets, models, and community insights curated for practical and educational use. This guide is suitable for researchers, educators, and developers interested in mastering Prompt Engineering techniques.
## Online-RLHF
This project offers a detailed guide to Online Iterative RLHF, a cutting-edge method proven more effective than offline methods. The open-source workflow allows reproduction of advanced LLMs using only open-source data, achieving results on par with or better than LLaMA3-8B-instruct. It includes comprehensive setup instructions covering fine-tuning, reward modeling, data generation, and iterative training.
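One step in such an iterative pipeline is data generation: sample several completions per prompt and keep the pair the reward model ranks best and worst for preference training. The sampler and reward function below are stubs assumed only to show the shape of that step:

```python
import random

# Best-of-n preference-pair construction for one iteration of online RLHF.
# Stub functions stand in for the policy model and the reward model.

random.seed(0)

def sample_completions(prompt, n=4):
    return [f"{prompt} answer-{i}" for i in range(n)]  # hypothetical samples

def stub_reward(text):
    return len(text) + random.random()  # placeholder scoring

def make_preference_pair(prompt):
    candidates = sample_completions(prompt)
    ranked = sorted(candidates, key=stub_reward)
    return {"prompt": prompt,
            "chosen": ranked[-1],     # highest-reward completion
            "rejected": ranked[0]}    # lowest-reward completion

pair = make_preference_pair("Explain RLHF briefly.")
```

Pairs produced this way feed the next round of preference optimization, after which the cycle of sampling, ranking, and training repeats.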
## typechat.net
TypeChat.NET provides cross-platform libraries aimed at developing natural language interfaces in .NET applications using strong types. This approach enhances determinism and reliability, while ongoing updates improve type validation and Semantic Kernel integration. The framework translates user intent into JSON and executable code, offering features like schema export and extensibility. Compatible with OpenAI models, it facilitates structured, type-safe interactions and dynamic adaptability.
## LMaaS-Papers
Explore the curated list of Language-Model-as-a-Service (LMaaS) papers designed for NLP researchers. Understand how pre-trained large language models, such as GPT-3, are provided as services, focusing on deployment efficiencies and tuning methods that don't require access to model parameters. Investigate strategies like text prompts, in-context learning, black-box optimization, and feature-based learning for task-specific customization. Suitable for researchers aiming to enhance LLM utility across varied applications and foster flexible research in linguistic fields.
## arxiv-translator
The Arxiv Translator project transforms ArXiv papers into Korean using Nougat OCR, offering quicker access to new academic papers. Departing from Ar5iv's method due to update delays, this tool extracts and presents papers independently, enhancing accessibility. While translations aid understanding, original papers are recommended for detailed insights. Users can navigate a comprehensive list of translated works linked to their specific ArXiv pages.
## SPIN
SPIN uses a self-play mechanism to improve language models, enabling self-enhancement through iteration. It generates training data from past iterations to refine model strategies and excels over models trained via direct preference optimization. SPIN achieves enhancements without needing extra human-annotated data beyond what's required for supervised fine-tuning. The method is theoretically grounded and validated on multiple benchmarks, ensuring data distribution alignment. Detailed setup guides and open-source availability aid replication and further exploration.