# Large language models

## llm-engine
LLM Engine is a comprehensive tool for deploying and customizing large language models such as LLaMA, MPT, and Falcon. It supports hosted infrastructure or Kubernetes deployment, offering scalable solutions with ready-to-use APIs, efficient inference, and open-source integrations. Upcoming documentation for K8s installation and cost-effective strategies aims to optimize resources further. Explore the potential of AI models with LLM Engine's detailed guidance and flexible deployment options.

## LMFlow
LMFlow offers an extensible toolbox for efficient fine-tuning of large-scale models, supporting diverse optimizers, conversation templates such as Llama-3 and Phi-3, and advanced techniques like speculative decoding and LISA for memory-efficient training. Recognized with the Best Demo Paper Award at NAACL 2024, it also provides tools for chatbot deployment and model evaluation, suited to practitioners who need to adapt and deploy large models effectively.

## LLM-Training-Puzzles
Engage with eight challenging puzzles focused on training large language models across multiple GPUs. The project provides practical exercises in memory efficiency and compute pipelining, both central to current AI work, and is an ideal resource for exploring neural network training at scale without extensive infrastructure; everything runs in Colab for convenience. The series builds on Sasha Rush's earlier puzzle collections to offer a thorough dive into the challenges of training at scale.

## gpt4all
GPT4All offers a seamless way to run large language models (LLMs) on personal computers, with no cloud APIs or GPU required. It supports Windows, macOS, and Linux, with detailed system requirements for performance tuning. Integrations with LangChain, Weaviate, and OpenLIT extend its functionality, while regular updates and open-source collaboration keep it broadly accessible.
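
A minimal local-inference sketch using the `gpt4all` Python bindings; the model file name below is only an example and should be swapped for any model listed in the GPT4All catalog (it is downloaded on first use):

```python
from gpt4all import GPT4All

# Example model name from the GPT4All catalog (an assumption; substitute your own).
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain in one sentence what a GGUF file is.", max_tokens=128)
    print(reply)
```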

## PromptCraft-Robotics
Join a collaborative initiative to share prompt examples for large language models in robotics. This repository, with a sample robotics simulator integrating ChatGPT, welcomes contributions across diverse robotics fields. Engage with LLMs such as ChatGPT, GPT-3, and Codex. Participate in a community dedicated to evolving robotics and AI interfaces by submitting innovative prompt examples.

## local-rag-example
Discover a collection of practical examples built around local large language models in the Local Assistant Examples repository. Originating from a guide on implementing Retrieval-Augmented Generation (RAG) locally with LangChain, Ollama, and Streamlit, the repository has since grown to cover a wider range of educational material. Each example ships with its own README and is structured for clear understanding and implementation. The project is intended for educational use, offering simplified views of LLM applications rather than production-ready code, and new examples are added over time.

## chie
Discover Chie, a desktop application for language models such as ChatGPT and Bing Chat on macOS, Linux, and Windows, built with a native UI for each platform. The app is at an early stage and offers extensibility through APIs, so users should expect regular API changes and the possibility of data loss. Licensed under GPLv3, with plans to transition to MIT licensing in 2028, Chie encourages open-source contributions and forking, with significant contributions requiring a contributor license agreement.

## ontogpt
OntoGPT is a Python package that combines large language models with ontology-based grounding to extract structured information from text. It supports command-line use and includes a minimalist web app interface. The package integrates multiple model APIs, such as OpenAI and Azure, via API keys, and supports open models through the ollama package for added flexibility. OntoGPT converts unstructured text into structured data, which is particularly useful for biological data management, and its capabilities and evaluations are documented for verification and reproduction.

## DataChad
DataChad V3 lets users query their own datasets by combining state-of-the-art embeddings, vector databases, and language models. Supporting various file types, it builds detailed knowledge bases and smart FAQs for accurate data retrieval. With local caching of chat history and flexible deployment options, it offers a streamlined tool for data exploration.

## langchain
LangChain offers a versatile framework for developing applications with large language models (LLMs), simplifying the application lifecycle with open-source components and seamless integrations. It facilitates the creation of reasoning applications using LangGraph for stateful agents and provides LangChain Expression Language (LCEL) for clear workflow articulation. LangSmith aids in app debugging, while LangGraph Cloud supports deployment, ensuring efficient transition from prototype to production. Comprehensive tutorials and documentation further support developers in leveraging LangChain's full potential.
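
As a small illustration of LCEL, the sketch below pipes a prompt, a chat model, and an output parser together; the model name and the presence of an OpenAI API key are assumptions:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # assumes OPENAI_API_KEY is set

# Compose the workflow declaratively with the | operator (LCEL).
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"topic": "retrieval-augmented generation"}))
```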

## InternEvo
InternEvo is an open-source, lightweight framework for model pre-training with minimal dependencies. It supports both large-scale GPU-cluster training and single-GPU fine-tuning, reaching nearly 90% acceleration efficiency across 1,024 GPUs. The project regularly releases advanced large language models such as the InternLM series, which compare favorably with many notable open-source LLMs. Installation is straightforward, with support for torch, torch-scatter, and flash-attention to accelerate training, and comprehensive tutorials and tools support efficient model development and community contribution.

## chat_templates
This repository contains a variety of chat templates for instruction-tuned large language models (LLMs), compatible with Hugging Face's Transformers library. It includes templates for recent models such as Meta's Llama-3.1 and Google's Gemma-2, which can be integrated into applications for well-formatted interaction and response generation. Detailed examples and configurations make the resource useful for developers working on conversational AI, and contributions of additional templates are encouraged.
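
For context, this is how a chat template is applied through the Transformers API; the model id is an example (some checkpoints are gated), and the same call works for any tokenizer that ships or is patched with a template from this repository:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # example id

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What does a chat template do?"},
]

# Render the conversation into the exact prompt string the model expects.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```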

## LLaMA-VID
LLaMA-VID extends language models to hour-long videos by representing video content with an additional context token, broadening the capability of existing frameworks. Built on LLaVA, the project provides comprehensive resources, including models, datasets, and scripts covering everything from installation to training. The fully fine-tuned models support tasks ranging from short- to long-video comprehension, advancing contextual video analysis. Accepted at ECCV 2024, LLaMA-VID contributes to multimodal instruction tuning through its approach to visual embedding and text-guided feature extraction.

## cody
Cody is an open-source AI coding assistant that combines AI with codebase context to help developers write, understand, and fix code directly in their IDEs. Features include chat-based inquiries, autocomplete, and inline edits. Supporting models such as Anthropic Claude 3.5 Sonnet and OpenAI GPT-4o, Cody suits both individual and enterprise use and provides free model access to developers.

## xtuner
XTuner is a versatile toolkit for efficiently fine-tuning both language and vision models, compatible with a variety of GPU platforms. It offers support for models like InternLM, Mixtral, Llama, and VLMs such as LLaVA, ensuring flexibility and scalability. With features such as FlashAttention and Triton kernels, XTuner optimizes training processes and integrates seamlessly with DeepSpeed. It supports several training algorithms, including QLoRA and LoRA, and provides a structured data pipeline that accommodates diverse dataset formats. XTuner models are ready for deployment through systems like LMDeploy and can be evaluated with tools such as OpenCompass. Recent updates include support enhancements and installation guidance.

## mlc-llm
MLC LLM offers a high-performance compiler and deployment engine for large language models, enabling seamless development and optimization across platforms like Linux, macOS, web browsers, and mobile devices. With a unified inference engine and OpenAI-compatible APIs, it supports multiple programming languages and operating systems. Designed to streamline AI model deployment, the project is continuously improved in collaboration with the community to ensure top-tier performance and reliability.
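
Because the serving engine exposes OpenAI-compatible endpoints, a client can be as simple as the sketch below; the URL, port, and model id are assumptions to be replaced with whatever the local server reports:

```python
from openai import OpenAI

# Point the standard OpenAI client at the locally served, MLC-compiled model.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Llama-3-8B-Instruct-q4f16_1-MLC",  # assumed model id
    messages=[{"role": "user", "content": "Hello from an edge device!"}],
)
print(response.choices[0].message.content)
```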

## ChatDev
ChatDev leverages intelligent agents in a virtual software-company framework to transform collaborative programming. Built on large language models and highly customizable and extendable, ChatDev facilitates agent collaboration through Multi-Agent Networks and Experiential Co-Learning, supporting varied organizational functions and enhancing problem-solving across technological domains.

## TypeChat
TypeChat is a library designed to simplify the creation of natural language interfaces by using types instead of complex decision trees. It utilizes schema engineering to enhance interactions with large language models, solving issues like constraining model replies and validating responses. By defining types for application intents, developers can easily build interfaces for applications such as sentiment analysis and shopping carts. TypeChat generates prompts and verifies responses, ensuring they match user intentions. Learn how schema engineering serves as an alternative to prompt engineering, providing a more efficient way to develop reliable language interactions.

## Gepetto
Gepetto is a Python plugin for IDA Pro that uses large language models to analyze decompiled functions, providing function explanations and variable renaming capabilities. Compatible with OpenAI, Groq, Together, and Ollama models, it allows for flexible integration. Installation involves Python setup in IDA and configuring necessary API keys. Users can switch between models and perform tasks with specific hotkeys, improving the analysis workflow. The plugin works with the HexRays decompiler and encourages critical evaluation of AI insights. Gepetto also offers language customization to broaden accessibility.

## sandbox-conversant-lib
Discover a flexible platform for building customizable chatbots on top of Cohere's language models. It supports tailored personas, efficient chat-history management, and Streamlit demos, making it suitable for support, sales, and education use cases.

## langboot
Langboot applies LangChain principles to build AI applications on Spring Boot, incorporating large language models such as OpenAI and ChatGLM2. Using a tech stack that includes Java 17+, Spring Boot 3.1.0, MyBatis-Plus, and SSE for streaming communication, the project supports voice interaction, comprehensive knowledge management, and text and image generation. Planned expansions aim to add more language models for a wider range of applications, providing a resource-rich environment for AI developers.

## MS-AMP
Microsoft Automatic Mixed Precision (MS-AMP) is an efficient deep learning package that improves model training through automatic mixed precision, including FP8 support. The latest release, v0.4.0, focuses on optimizing resource allocation and speeding up the training of large models, as explored in FP8-LM. For more detailed information, visit aka.ms/msamp/doc, and ensure trademark use complies with Microsoft's guidelines.
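
A rough sketch of the wrapper-style usage MS-AMP documents is shown below; the `msamp.initialize` call and the "O2" optimization level follow the project's examples, but treat the exact names and levels as assumptions and confirm them against the current docs:

```python
import torch
import msamp  # MS-AMP package

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Swap supported modules and optimizer state for FP8-aware equivalents.
model, optimizer = msamp.initialize(model, optimizer, opt_level="O2")

loss = model(torch.randn(8, 1024, device="cuda")).sum()
loss.backward()
optimizer.step()
```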

## chatpdflike
A document question-answering tool that enables PDF interaction through natural-language queries. Leveraging LLMs such as OpenAI's GPT-3.5 Turbo, it delivers precise answers with clear source references, and its straightforward interface and robust processing make document analysis efficient and adaptable. Independently developed, this project is not associated with ChatPDF.

## llama_cpp-rs
Llama_cpp-rs provides user-friendly, safe Rust bindings to the C++ library llama.cpp, enabling GGUF-based language models to run on CPUs without requiring machine-learning expertise. Supporting backends such as CUDA, Vulkan, Metal, and HIP/BLAS, the library adapts to a range of hardware environments, and users can load models and generate predictions with minimal code. It also offers experimental features such as memory context-size prediction, and welcomes contributions while emphasizing a clean user experience and optimized performance through Cargo release builds.

## CLoT
The CLoT project investigates the Leap-of-Thought ability in large language models via the Oogiri humor generation game. It examines non-sequential creative thinking, shedding light on humor-based AI applications. Explore newly released datasets and checkpoints on ModelScope and Hugging Face, and visit the project page for updates, including their CVPR 2024 paper.

## Yi
Yi is an open-source project delivering advanced bilingual language models trained on an extensive multilingual corpus. The models excel at linguistic understanding and reasoning and are noted for their performance on the AlpacaEval Leaderboard. Yi models integrate stably with common AI frameworks and deliver strong results in both English and Chinese, making them suitable for coding, math, creative work, and other personal, academic, and commercial applications.

## ragflow
RAGFlow is an open-source engine improving the RAG workflow via deep document understanding. It effectively extracts knowledge from unstructured data using large language models, providing accurate question-answering with reliable citations. Compatible with varied data formats, including Word documents, slides, and scanned files, it features intelligent document chunking to enhance data retrieval. RAGFlow simplifies integration with user-friendly APIs, serving as a dependable tool for both personal and business applications in automating the RAG process.

## LlamaIndexTS
This project provides a lightweight solution for integrating large language models like OpenAI ChatGPT into applications, with support for diverse JavaScript environments including Node.js, Deno, and Bun. With TypeScript support, users can utilize their own data across various LLMs like OpenAI, Anthropic, and HuggingFace. The framework allows easy setup across platforms such as Next.js and Cloudflare Workers, ensuring compatibility with serverless environments and edge runtimes. Explore core components for efficient data management and querying, enhancing LLM capabilities within your development workflow.

## modelscope-agent
Explore an open-source framework designed for the creation of adaptable and scalable agent systems. ModelScope-Agent provides an efficient setup with comprehensive tools and large language model (LLM) interfaces. Its features include role-playing dynamics, tool integration, planning, and memory management, all facilitated through a unified interface. The framework supports extensive customization with rich models and built-in utilities like code interpretation and web browsing. Recent updates have introduced tools like CodexGraph and a Data Science Assistant, illustrating the platform's growing versatility across multiple applications.

## airllm
AirLLM reduces the hardware requirements of running large language models. It allows 70B models to run on a 4GB GPU and up to 405B models on an 8GB GPU by loading and executing the model layer by layer (with optional compression), rather than relying on quantization, distillation, or pruning. Recent updates include support for Llama 3, CPU inference, and compatibility with ChatGLM and Qwen models.
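
A sketch of the layer-by-layer inference flow, loosely following AirLLM's published examples; the `AutoModel` wrapper, the model id, and the generate arguments are assumptions to check against the current README:

```python
from airllm import AutoModel  # assumed entry point per AirLLM's examples

model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")  # example id

tokens = model.tokenizer(
    ["What is the capital of France?"],
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

# Layers are streamed from disk one at a time, so even a 4GB GPU can run this.
output = model.generate(tokens["input_ids"].cuda(), max_new_tokens=20, use_cache=True)
print(model.tokenizer.decode(output[0]))
```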

## langchain-course
Explore LangChain, a versatile open-source framework for building AI apps with large language models like ChatGPT. This course introduces LangChain in four modules that blend theory and practice. Participants should have basic Python and JavaScript skills; the course walks through setting up tools such as an OpenAI API key, making it a good opportunity to build machine-learning skills within an engaged community. Regular updates are shared on the associated YouTube channels and the community Discord server.

## MarkLLM
Discover MarkLLM, a versatile open-source toolkit for watermarking large language models (LLMs), designed to verify text authenticity and origin. This toolkit features a range of algorithms, customization options, visualization capabilities, and thorough evaluation mechanisms, making it a valuable resource for researchers in AI and language model development.

## humanscript
Explore a scripting interpreter that executes commands written in natural language via LLMs, with support for cloud models such as GPT-3.5 and GPT-4 as well as local open-source options. The tool converts intuitive humanscript commands into executable code on the fly, with no predefined syntax rules, and installs easily via Docker or Homebrew, catering to developers seeking simplified automation.

## langchain-examples
Explore a diverse range of applications utilizing LangChain for large language model capabilities. This collection includes examples like chatbots, document summarization, and generative Q&A, presented through interactive Streamlit apps. Understand AI technologies better through projects demonstrating LLM observability and search queries with APIs such as OpenAI, Chroma, and Pinecone. Perfect for developers and AI researchers looking for practical insights into cutting-edge AI tools.

## LongBench
LongBench is an open benchmark evaluating large language models' ability to comprehend long contexts in both Chinese and English. It covers six categories with 21 tasks, such as single- and multi-document QA, and summarization. Featuring a cost-effective automated evaluation process, LongBench assesses models across 14 English tasks, 5 Chinese tasks, and 2 code tasks, with contexts from 5k to 15k words across 4,750 test instances. LongBench-E provides balanced evaluations for different context lengths, aiding in understanding performance variations.
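
One way to pull a single LongBench task for evaluation is via the Hugging Face `datasets` library; the hub id, subset name, and field names below follow the benchmark's published data format but should be treated as assumptions:

```python
from datasets import load_dataset

# trust_remote_code is needed for hub datasets that ship a loading script.
data = load_dataset("THUDM/LongBench", "hotpotqa", split="test", trust_remote_code=True)

sample = data[0]
print(sample["context"][:500])  # the long input context
print(sample["input"])          # the question posed over that context
print(sample["answers"])        # reference answers used for scoring
```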

## gpttools
gpttools is an R package created to integrate large language models (LLMs) smoothly into R workflows, enabling access to AI services such as OpenAI, HuggingFace, and Anthropic. This package allows users to select models, configure API keys in R, and manage data sharing with a focus on privacy and ethics. It supports diverse models, enhancing applications in text-based knowledge tasks, while ensuring confidentiality of sensitive information.

## langchain-experiments
LangChain empowers businesses to harness large language models through innovative applications like searchable video transcripts and intelligent chatbots. Its flexible architecture enables seamless integration with external data sources, fostering efficient data analysis and improved customer experience.

## sqlcoder
SQLCoder harnesses large language models to transform natural-language questions into accurate SQL queries, surpassing GPT-4 in evaluations with the sql-eval framework. It integrates efficiently with databases and supports multiple hardware platforms, maintaining flexibility and accuracy across SQL categories. SQLCoder models are released under Apache-2.0 and CC BY-SA 4.0 licenses, encouraging community-driven improvements. A demo, a straightforward installation process, and documented training methodology are available.
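
An illustrative text-to-SQL call with Transformers is sketched below; the checkpoint name and prompt layout are assumptions, and the model card documents the exact prompt template the released weights expect:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "defog/sqlcoder-7b-2"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Provide the schema alongside the question so the model can ground its SQL.
prompt = (
    "### Task\nGenerate a SQL query to answer the question below.\n"
    "### Schema\nCREATE TABLE orders (id INT, amount DECIMAL, created_at DATE);\n"
    "### Question\nWhat was the total order amount in 2023?\n"
    "### SQL\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```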

## txtchat
Txtchat enhances search with RAG and LLMs, transforming information retrieval into dynamic interaction. Integrating with messaging platforms such as Rocket.Chat, it leverages AI for insightful responses. Built on Python 3.8+ and the txtai framework, txtchat supports diverse workflows and personas, offering easy installation and extensibility for conversational AI applications.

## PaddleNLP
PaddleNLP, built on the PaddlePaddle framework, offers a robust toolkit for large language model development, enabling efficient training, seamless compression, and high-speed inference across diverse hardware platforms including NVIDIA GPUs and Kunlun XPUs. Designed for industrial-grade applications, it facilitates smooth hardware transitions and reduces development costs with advanced pre-training and fine-tuning strategies. The project’s operator fusion strategies enhance parallel inference speed, applicable in fields like intelligent assistance and content creation.

## DeepBI
DeepBI is an AI-driven platform designed for comprehensive data analysis, using advanced language models to simplify exploration, querying, visualization, and sharing of data from multiple sources. It is engineered to support data-driven decisions through interactive data analysis and query generation. The platform is compatible with databases like MySQL, PostgreSQL, and MongoDB, and operates across major systems such as Windows, Linux, and Mac. With multilingual support in English and Chinese, DeepBI increases its accessibility. Future updates will include automated data analysis reports, expanding its usefulness for businesses searching for efficient data solutions.

## petals
Petals runs and fine-tunes large language models such as Llama 3.1, Mixtral, Falcon, and BLOOM on home desktops or Google Colab by pooling a decentralized network of machines, BitTorrent-style. Tapping into the public network can make fine-tuning and inference up to 10 times faster than offloading-based approaches. The open-source project encourages the community to share computational resources, especially GPUs, broadening what is possible for tasks such as text generation and chatbots, and privacy-conscious users can run a private swarm for sensitive data. Detailed guides are available for Linux, Windows, and macOS, with community support on Discord.
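
A minimal sketch of joining the public swarm as a client; the class name follows Petals' documented API, while the model id is an example whose availability depends on what the swarm currently serves:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"  # example; may require gated access

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only a small shard runs locally; remaining blocks are executed by swarm peers.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A haiku about distributed inference:", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```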

## awesome-gpt-prompt-engineering
This repository provides a broad collection of resources such as roadmaps, guides, techniques, and community connections focused on GPT prompt engineering. It acts as a central point for developers and researchers to improve their expertise in designing effective prompts for GPT and other large language models. Access tools, tutorials, and up-to-date trends in AI to enhance knowledge and usage of prompt engineering across different models and platforms. Keep informed about progress and engage with communities to contribute to this growing field.

## LawBench
This page provides an objective overview of LawBench, a benchmark for evaluating large language models (LLMs) in the Chinese legal system. LawBench highlights tasks such as legal entity recognition and crime amount calculation across three cognitive dimensions: memory, understanding, and application. Unique metrics like the waiver rate assess models' legal query responses, with evaluations on 51 LLMs offering insights into multilingual and Chinese LLM performance in various legal contexts.

## vector-vein
Explore a seamless method to create automated workflows using AI. With simple drag-and-drop tools, complex tasks can be managed without programming. Integrate large language models for intelligent operations, and learn about installation, configuration, and API optimization through online tutorials. Experience customizable automation solutions tailored to various tasks.

## langchain-decoded
Explore how LangChain enables large language model applications through a detailed series. Covering topics from chatbots and text summarization to code understanding, each section provides clear insights with Python notebooks. Discover LangChain models, embeddings, prompts, indexes, memory, chains, agents, and callbacks, by either forking the repository or using Google Colab. Ideal for developers seeking to leverage open-source tools in machine learning projects.

## prodigy-openai-recipes
The project demonstrates zero- and few-shot learning with OpenAI models and Prodigy for creating high-quality datasets with minimal manual annotation. It covers setting up Prodigy for named-entity recognition and text categorization, using OpenAI predictions to bootstrap a gold-standard dataset. Task-specific prompt configuration enables precise classification and model training, and the recipes also address imbalanced data and exporting annotations to spaCy or Hugging Face transformers.

## JioNLP
JioNLP provides a versatile Python library designed for Chinese NLP tasks, focusing on ease of use and precise results. Key features include parsing tools for vehicle license plates and time semantics, as well as keyphrase extraction. It supports data augmentation and regex parsing, offering functions for text augmentation and data cleaning. Suitable for developers seeking to enhance Chinese NLP projects with efficient, easy-to-use tools. Easily integrate JioNLP with a simple 'pip install jionlp'.
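
A small sketch of JioNLP's function-style interface; the function names below (`parse_time`, `clean_text`) follow the project's examples and should be verified against its documentation:

```python
import jionlp as jio

# Resolve a Chinese time expression ("tomorrow at 3 pm") relative to the current time.
print(jio.parse_time("明天下午三点"))

# Strip HTML tags, exceptional characters, and redundant whitespace from raw text.
print(jio.clean_text("<p>欢迎使用&nbsp;JioNLP!</p>"))
```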

## levanter
Explore a framework for training large language and foundation models with a focus on readability, scalability, and reproducibility. Built with JAX, Equinox, and Haliax, it supports distributed training on TPUs and GPUs, integrates with Hugging Face tools, and offers advanced optimizers such as Sophia alongside Optax. Levanter aims for reproducible results across computing environments, with on-demand data preprocessing and robust logging, making it well suited to developers who want efficient, well-benchmarked model development.

## sglang
SGLang provides an optimized framework for serving large language and vision models, improving interaction speed and control through tight backend and frontend integration. It incorporates RadixAttention for prefix caching, efficient token attention, and parallel processing. The framework supports a wide array of models such as Llama and LLaVA and has an active community.
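
A sketch of the frontend DSL against a locally launched server; the endpoint URL and helper names (`function`, `user`, `assistant`, `gen`, `RuntimeEndpoint`) follow SGLang's published examples and should be checked against the current docs:

```python
import sglang as sgl

# Attach the frontend to a running SGLang server (default port assumed).
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

@sgl.function
def quick_qa(s, question):
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=64))

state = quick_qa.run(question="What does RadixAttention cache?")
print(state["answer"])
```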