# LLM
quivr
Leveraging generative AI, this tool provides a robust RAG framework that is fast, efficient, and customizable. It integrates with various LLMs and supports multiple file types, enhancing workflows through internet search capabilities and additional tools. Its quivr-core package allows easy file ingestion and querying, designed to let teams stay focused on product development. This tool is ideal for those seeking innovative approaches to data retrieval and management.
Firefly
Firefly is a versatile tool for training large models, offering pre-training, instruction fine-tuning, and DPO functionality for a broad range of popular models, including Llama3 and Vicuna. It employs methodologies such as full parameter tuning, LoRA, and QLoRA for efficient resource usage, catering to users with limited computing power. Its user-friendly approach allows for straightforward model training with optimized configurations to minimize memory and time consumption. Discover open-source model weights and benefit from proven methods, achieving notable improvements in the Open LLM Leaderboard.
skyagi
SkyAGI utilizes Large Language Models to simulate human-like behaviors in NPCs, offering a sophisticated approach to role-playing game interactions. By featuring characters from 'The Big Bang Theory' and 'The Avengers', it enhances the realism and depth of gaming experiences. Users can easily customize character configurations and install through PyPI or Make commands. SkyAGI's NPCs exhibit memory and story progression in dialogues, enhancing gameplay in a novel way.
restai
RestAI provides an AI as a Service platform, allowing creation and consumption of AI projects through a REST API. It supports various agent types and includes features like robust authentication, automatic VRAM management, and integration of public and local LLMs. The platform includes tools for visual projects with Stable Diffusion and LLaVA, and a router to guide inquiries to appropriate projects. Accompanied by a documented API, a user-friendly frontend, and Docker-based orchestration, RestAI offers an efficient setup for effective AI service deployment.
danswer
Danswer is an open-source AI Assistant that enhances productivity through streamlined integration with company tools and documents. It offers both Chat and Unified Search functionalities and is compatible with various environments from local to cloud. Designed to be secure and customizable, it supports user authentication and role management, providing access to team-specific knowledge and making it suitable for organizations of any size.
aiac
This tool significantly enhances how developers create Infrastructure as Code (IaC) templates, configurations, and utilities by integrating with advanced LLM providers such as OpenAI, Amazon Bedrock, and Ollama. It supports a variety of backends for diverse environment customization. With its command line interface, users can efficiently generate robust templates, optimize CI/CD pipelines, and develop policies as code. The tool is accessible through installation options like Homebrew, Docker, and Go, while supporting configurations via a TOML file. Whether it's implementing IaC for AWS, building secure deployment files, or creating efficient query utilities, this tool offers exceptional adaptability and efficiency.
open-assistant-api
Open Assistant API is a versatile open-source AI assistant designed for local deployment and integration with commercial and private models. It supports features like One API, R2R RAG engine, and internet search for scalable AI solutions. Compatible with OpenAI interfaces, it offers extensive model support beyond GPT and allows easy customization.
NExT-GPT
The project presents a versatile multimodal model that processes and generates various output types, including text, images, videos, and audio. It utilizes pre-trained models and advanced diffusion technology to enhance semantic understanding and multimodal content generation. Recent updates include the release of code and datasets, supporting further research and development. Developers can customize NExT-GPT with flexible datasets and model frameworks. Instruction tuning strengthens its performance across different tasks, making it a solid foundation for AI research.
ell
Ell is a Bash-written CLI tool for seamless interaction with LLMs directly in the terminal. It offers features like pipe-friendly commands, terminal context integration, and interactive communication. Simple installation and support for custom templates and plugins provide flexibility for varied requirements.
ChatPilot
ChatPilot is a web-based Chat Agent WebUI that integrates Google Search, file and URL dialogues, and code execution. It supports LLMs such as OpenAI and Azure models through API access, with a FastAPI backend and Svelte frontend enabling voice and image functionalities. It is suitable for managing user permissions and importing/exporting chat histories.
what-llm-to-use
Discover how developers navigate the fast-paced DevAI sector by choosing between open-source and commercial LLMs for software development. This guide details vital factors, practical use cases, and top models including Code Llama, GPT-4, and Claude 2. Learn about deployment strategies and key considerations to optimize development workflows with the most suitable LLMs.
simpleAI
Discover a flexible self-hosted platform for experimenting with text completion, chat, edits, and embeddings without depending on proprietary APIs. It requires Python 3.9+, is optimized with gRPC, and integrates diverse models easily, offering quick deployment with comprehensive guidance.
langchain-benchmarks
LangChain Benchmarks provides an organized framework for evaluating LLM-related tasks, with a focus on end-to-end use cases and integration with LangSmith. Users can gain insights into dataset collection and task evaluation methods. The project also encourages community contribution for improving benchmarking techniques, with results discussed in blogs. Access detailed tool usage documentation, learn how to recreate benchmarks, and explore historical data through archived benchmarks for deeper insights.
funNLP
This extensive repository offers a curated collection of open-source GitHub packages focused on Chinese NLP. Covering topics like ChatGPT model evaluations, multi-modal models, and domain applications, it serves as a comprehensive toolkit. The repository is designed for ease of access and encourages community contributions with regular updates.
litellm
LiteLLM provides integration with various large language model APIs such as OpenAI, Azure, and Huggingface, using a standardized format. It delivers consistent outputs and includes features for retries and fallbacks. The platform offers budget and rate limit management, spending tracking, and provider translation support for tasks like completion, embedding, and image generation. LiteLLM features streaming, asynchronous operations, and robust logging, suitable for enterprise settings. The proxy server enhances load balancing and authorization management for efficient, secure deployments.
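The retry-and-fallback behavior described above can be illustrated with a minimal sketch. This is not LiteLLM's actual API; `call_provider` and the provider names are hypothetical stand-ins for real provider calls:

```python
def call_provider(provider: str, prompt: str) -> str:
    """Hypothetical stand-in for a provider API call."""
    if provider == "flaky-provider":
        raise RuntimeError("provider unavailable")
    return f"[{provider}] response to: {prompt}"

def completion_with_fallbacks(prompt: str, providers: list[str]) -> str:
    """Try each provider in order, falling back on failure --
    the same pattern LiteLLM applies across providers."""
    errors = []
    for provider in providers:
        try:
            return call_provider(provider, prompt)
        except RuntimeError as exc:
            errors.append(f"{provider}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

result = completion_with_fallbacks("hello", ["flaky-provider", "backup-provider"])
print(result)
```

Because every provider is called through one standardized function, swapping backends or adding fallbacks never changes the calling code.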
generative-ai-use-cases-jp
Generative AI Use Cases JP offers secure applications for business transformation through generative AI. The repository includes chat, text generation, and translation solutions using advanced language models to meet business requirements. With browser extensions and AWS integrations, including Amazon Bedrock and Amazon Kendra, processes are streamlined. Discover capabilities like RAG chat and video analysis, demonstrated by companies like Yasashii Te and Salsonido for operational benefits.
farfalle
Farfalle is an AI-driven, open-source search engine featuring local and cloud language model integration such as llama3 and GPT-4. It supports custom models via LiteLLM and interfaces with APIs like Tavily and Bing. Built on FastAPI and Next.js, Farfalle offers efficient deployment options including Docker.
kor
Kor provides a system for extracting structured data from text using large language models (LLMs). Users can define extraction schemas and provide examples to improve results. Kor works with various LLMs, specializing in prompt-based generation and output parsing, and is compatible with the LangChain framework for seamless integration. It supports Pydantic versions 1 and 2, ensuring robust schema validation. Kor suits AI-assistant integrations and natural-language interaction with APIs, although it has limitations with large prompts and long text inputs.
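The schema-plus-parsing idea can be sketched with the standard library alone. The schema shape and the pipe-delimited output format below are illustrative, not Kor's actual API:

```python
# Hypothetical schema in the spirit of a Kor extraction definition.
schema = {"id": "person", "fields": ["first_name", "last_name"]}

def parse_extraction(llm_output: str, schema: dict) -> list[dict]:
    """Parse a pipe-delimited LLM response (one record per line)
    into records following the schema's field order."""
    records = []
    for line in llm_output.strip().splitlines():
        values = [v.strip() for v in line.split("|")]
        if len(values) == len(schema["fields"]):
            records.append(dict(zip(schema["fields"], values)))
    return records

llm_output = "Ada | Lovelace\nAlan | Turing"
records = parse_extraction(llm_output, schema)
print(records)
```

In the real library, the schema also drives prompt construction, so the model is told exactly which fields and format to emit before its output is parsed back.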
langchain-extract
LangChain Extract provides an effective solution for extracting text and file data using FastAPI, LangChain, and PostgreSQL. It features a FastAPI web server that supports the creation of customizable extractors through JSON Schema. With integration into LangChain, it enhances data processing capabilities, suitable for various data extraction scenarios. The REST API and OpenAPI Documentation facilitate ease of access, while the demo service and continuous development highlight its viability for creating custom applications.
magicoder
Explore Magicoder's method of using open-source code via OSS-Instruct to improve Large Language Models, generating diverse and low-bias instruction data. Its models excel on the HumanEval benchmark and are accessible via Gradio demos, showcasing practical applications. Performance across coding tasks can be compared on the EvalPlus Leaderboard.
search_with_lepton
Lepton simplifies building a conversational search engine in under 500 lines of code, with built-in LLM support and integration with Bing and Google. Its customizable UI and shareable results bolster user interaction. Setup requires only basic steps such as library installation and API key acquisition. Ideal for developers seeking an efficient search engine solution.
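The core pattern behind a conversational search engine is assembling retrieved snippets into a grounded prompt. A minimal sketch, with the prompt wording and snippet structure as illustrative assumptions (not Lepton's code):

```python
def build_search_prompt(query: str, snippets: list[dict]) -> str:
    """Assemble numbered search-result context plus the user question,
    so the LLM can answer with [n]-style citations."""
    context = "\n".join(
        f"[{i}] {s['title']}: {s['snippet']}" for i, s in enumerate(snippets, 1)
    )
    return (
        "Answer the question using the numbered sources below and "
        "cite them as [n].\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

snippets = [
    {"title": "Lepton docs", "snippet": "A conversational search demo."},
]
prompt = build_search_prompt("What is Lepton?", snippets)
print(prompt)
```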
magpie
Explore an innovative approach to synthesizing quality alignment data using aligned language models without prompt engineering. This method utilizes pre-query templates for enhanced data generation, creating user queries and model responses for comprehensive alignment datasets that improve model performance. Discover recent updates and datasets like Qwen2.5 and Magpie Llama-3.1, demonstrating state-of-the-art performance, and learn about the methodologies that provide open access to AI alignment processes.
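The pre-query-template trick can be sketched simply: the prompt stops exactly where a user turn would begin, so an aligned model autocompletes a synthetic user query itself. The header tokens below are Llama-3-style and shown for illustration only:

```python
def pre_query_prompt(system_prompt: str) -> str:
    """Build a prompt truncated at the start of the user turn,
    so an aligned chat model generates the user query itself
    (illustrative Llama-3-style header tokens)."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n"
    )

prompt = pre_query_prompt("You are a helpful assistant.")
print(prompt)
```

Sampling completions of this prompt yields diverse user queries without any hand-written seed prompts; each generated query is then answered by the same model to form an instruction pair.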
awesome-langchain
Explore a carefully curated array of tools, projects, and resources built on the LangChain framework. This evolving ecosystem delivers solutions across domains like low-code platforms, services, agents, and templates. Learn through open-source projects and stay informed with regular newsletters. Engage with the community by contributing insights or initiating discussions as per guidelines. Perfect for developers aiming to optimize their LLM projects with LangChain.
instructor
Discover Instructor, a key Python library for managing structured outputs from LLMs. Downloaded over 600,000 times monthly, it offers a Pydantic-based API for easy validation and management. It integrates smoothly with LLM providers beyond OpenAI, with ports available for TypeScript, Ruby, Go, and Elixir alongside the core Python library. Features include straightforward response-model configuration, flexible backend options, and advanced hooks for process monitoring. Enhance your LLM workflows with ease of installation and usage.
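The response-model idea, validating a raw LLM reply against a typed schema, can be approximated with the standard library. This is a stdlib sketch of the concept, not Instructor's Pydantic-based API:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class UserInfo:
    name: str
    age: int

def parse_response(raw: str, model: type):
    """Validate a JSON LLM response against a typed model --
    a stdlib approximation of a response_model check."""
    data = json.loads(raw)
    for f in fields(model):
        if not isinstance(data.get(f.name), f.type):
            raise TypeError(f"field {f.name!r} is not {f.type.__name__}")
    return model(**data)

user = parse_response('{"name": "Ada", "age": 36}', UserInfo)
print(user)
```

The real library goes further: on validation failure it can re-prompt the model with the error message until the output conforms.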
Awesome-LLM-Survey
Discover an extensive compilation of surveys on Large Language Models, addressing critical areas such as instruction tuning, human alignment, and multi-modal integrations. Understand challenges like hallucination and compression, with insights into their applications in domains like health, finance, and others. A valuable tool for researchers involved in LLM development.
CipherChat
The CipherChat framework evaluates the generalizability of safety protocols in AI through the use of ciphers in non-natural languages. By teaching language models to comprehend and process ciphered inputs, it potentially bypasses traditional safety measures. The study includes comprehensive evaluations, demonstrating effective input transformation and post-processing decoding while minimally affecting established safety alignments. Extensive results and case studies are presented to further research in cipher utilization for AI safety. For detailed insights, refer to the ICLR 2024 publication.
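A Caesar shift, one of the simple ciphers such evaluations use, shows how an input can be wrapped in a "non-natural language" and decoded back after the model responds:

```python
def caesar(text: str, shift: int) -> str:
    """Shift letters by `shift`, leaving other characters alone --
    a minimal cipher for wrapping and unwrapping model inputs."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

enciphered = caesar("ignore prior rules", 3)
print(enciphered)              # ljqruh sulru uxohv
print(caesar(enciphered, -3))  # ignore prior rules
```

A model taught the cipher in-context can read the enciphered text, which is exactly why safety filters trained on natural language may fail to trigger.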
Scrapegraph-ai
ScrapeGraphAI is an open-source Python library designed for efficient data extraction using language models and graph logic. It supports extraction from both websites and local files such as XML, HTML, and JSON. The library offers flexible pipeline creation for various scraping needs, additional language model integrations, and advanced semantic processing tools. Easy to install via PyPI, it also provides features for script generation and audio output. Enhanced by OpenAI support and local model options, it serves as a versatile solution for web scraping tasks.
openui
OpenUI allows developers to visualize and prototype interactive user interfaces in real-time. With support for conversion to multiple frameworks like HTML, React, and Svelte, and compatibility with models such as OpenAI and Groq, it offers versatile integration options. Experience the live demo or configure locally with Docker for efficient setup.
HuixiangDou
The HuixiangDou project provides a versatile knowledge assistant leveraging LLM technology, with well-defined pipelines for data preprocessing, query rejection, and response generation. It is specifically designed for efficient group chats and real-time streaming communication, without requiring extensive training. The system supports several configurations, ranging from CPU-only to 80G GPU setups, and is suitable for deployment across web platforms, Android devices, and industrial applications. Additionally, it offers integration capabilities with platforms like Feishu and WeChat, includes a comprehensive web front and back end, and supports a wide range of file formats and retrieval methods such as knowledge graphs and internet searching. Designed for business use, it facilitates fluid interaction through various integrations.
smartgpt
SmartGPT is an experimental program that enables large language models (LLMs) to execute complex tasks independently by breaking them into smaller problems and utilizing internet resources. Its modular architecture supports plugins and customizable configurations via a 'config.yml' file. While it faces challenges in ecosystem stability and memory systems, it focuses on consistent execution with dynamic action chaining. The project continues to evolve with aims to incorporate advanced models like GPT-4, welcoming contributions from the developer community.
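The decompose-then-chain pattern can be sketched as follows. The fixed three-step planner and the action names are hypothetical; a real system would ask an LLM for the plan and run real tools:

```python
def decompose(task: str) -> list[str]:
    """Hypothetical planner: split a task into ordered sub-steps.
    A real agent would ask an LLM for this plan."""
    return [f"research: {task}", f"draft: {task}", f"review: {task}"]

def run_chain(task: str, actions: dict) -> list[str]:
    """Execute each sub-step with a matching action, feeding earlier
    results forward -- the dynamic action-chaining pattern above."""
    results = []
    for step in decompose(task):
        name, _, arg = step.partition(": ")
        results.append(actions[name](arg, results))
    return results

actions = {
    "research": lambda arg, prior: f"notes on {arg}",
    "draft": lambda arg, prior: f"draft using {prior[-1]}",
    "review": lambda arg, prior: f"reviewed {prior[-1]}",
}
chain = run_chain("llm agents", actions)
print(chain)
```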
rag-gpt
Efficiently set up a sophisticated customer service system using Flask, LLM, and RAG within minutes. This comprehensive solution includes both frontend and backend, compatible with cloud and on-premises LLMs, providing a customizable and straightforward interface. It adeptly handles various knowledge bases, such as websites, isolated URLs, and local files. Improve interactions with a customizable UI and simplified management through an admin console. Access a live demo with detailed guidance for smooth deployment.
graphrag
GraphRAG is a data pipeline designed to convert unstructured text into structured data using LLMs. It enhances data interpretation through knowledge graph structures, offering a methodology for analyzing private narrative data. The GraphRAG Accelerator allows integration with Azure, simplifying deployment. While effective, managing GraphRAG's indexing is crucial as it can be costly. For optimal performance, it is advisable to use prompt tuning and follow given guidelines. Microsoft Research's blog and GitHub discussions provide further insights.
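The essence of turning extracted text into a queryable structure is indexing (subject, relation, object) triples into a graph. A toy sketch with hypothetical triples; GraphRAG's real pipeline builds far richer, community-clustered graphs:

```python
from collections import defaultdict

# Hypothetical triples, as an LLM entity-extraction pass might emit them.
triples = [
    ("Ada Lovelace", "worked_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
]

def build_graph(triples):
    """Index triples into an adjacency map for graph-style retrieval."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return dict(graph)

graph = build_graph(triples)
print(graph["Ada Lovelace"])  # [('worked_with', 'Charles Babbage')]
```

Retrieval can then follow edges between entities instead of relying only on vector similarity, which is what makes multi-hop questions over private narrative data tractable.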
RepoAgent
Leverage an AI-driven framework to automate documentation processes at the repository level. RepoAgent effectively tracks changes, analyzes code, and offers detailed insights, using Large Language Models like GPT. This reduces manual workload and allows developers to prioritize key tasks such as verification. Enjoy efficient documentation with features like seamless Markdown updates, precise code relationship mapping, and improved concurrency, supporting sustainable team collaboration, particularly suitable for fast-evolving projects.
promptflow
Optimize AI development with comprehensive tools supporting the full cycle from ideation to deployment. These tools facilitate prompt engineering, iterative development, and performance evaluation for LLM applications. Seamlessly integrate with CI/CD workflows and collaborate with Azure AI for enhanced production efficiency.
openai
Discover a carefully curated collection of GPT and LLM tricks for developers. This includes techniques for handling asynchronous API requests, crafting automatic cold emails, compressing text with GPT-4, and identifying objects in images using natural language. You can also learn how to run local LLMs and utilize function calling with OpenAI's API. Perfect for developers aiming to elevate their AI projects and improve efficiency with cutting-edge tools.
tanuki.py
Tanuki enables smooth integration of LLM-enhanced functions into Python applications, focusing on reliability and type-safety. It offers simple implementation with lower costs and latency, supported by numerous well-known models. Tanuki automates model distillation and implements test-driven alignment, simplifying functionality enhancement without managing prompts. Experience scalable improvements and up to 90% cost savings, suitable for developers requiring efficient, structured output from LLMs across various applications.
oatmeal
Oatmeal provides a terminal interface for interacting with large language models via various backends, including ChatGPT and Ollama, and supports editor integration such as Neovim. The tool features customizable themes, session management, and installation on Windows, macOS, and Linux, with extensive configuration for personalized operation. With ongoing community contributions and issue tracking, Oatmeal focuses on stability and continuous improvement.
LangChain-for-LLM-Application-Development
The course provides a comprehensive guide for developers interested in utilizing the LangChain framework to enhance language model applications. Topics covered include interacting with LLMs, crafting prompts, utilizing memory capabilities, and constructing operational chains to improve reasoning and data processing. Hosted by LangChain creator Harrison Chase and Andrew Ng, the course enables rapid development of dynamic applications that leverage proprietary data, all in about an hour.
ChainForge
This open-source platform simplifies comparative prompt engineering and LLM response evaluation. It enables users to simultaneously query multiple LLMs, offering quick comparisons in response quality across various prompts and models. Supporting model providers like OpenAI and Google PaLM2, the platform provides robust tools for setting evaluation metrics and visualizing results. With features like prompt permutations, chat turns, and evaluation nodes, it facilitates a thorough analysis of prompt and model efficiency. Encouraging experimentation and sharing, it includes functionalities for exporting results and integrating evaluations into research projects, making it a practical tool for researchers.
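The prompt-permutation idea, expanding one template over every combination of variable values so each variant can be sent to each model, can be sketched with `itertools.product` (the template and variables are illustrative, not ChainForge's API):

```python
from itertools import product

def permute_prompts(template: str, variables: dict) -> list[str]:
    """Expand a template over every combination of variable values --
    the cross-product that drives systematic prompt comparison."""
    keys = list(variables)
    return [
        template.format(**dict(zip(keys, combo)))
        for combo in product(*(variables[k] for k in keys))
    ]

prompts = permute_prompts(
    "Summarize {doc} in a {tone} tone.",
    {"doc": ["the report", "the memo"], "tone": ["formal", "casual"]},
)
print(len(prompts))  # 4
```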
FastGPT
Explore FastGPT, an AI-driven knowledge base and Q&A system utilizing large language models for efficient data handling and model use. It features visual workflow tools, supporting complex question-answering and various document formats. Incorporating OpenAPI, voice input/output, and a friendly interface, FastGPT provides versatile interaction and effective knowledge management.
OpenNMT-py
OpenNMT-py is an open-source platform facilitating neural machine translation and language modeling. It supports various NLP tasks and is suitable for production environments. Features include model quantization, PyTorch integration, and GPU acceleration for large models. Comprehensive tutorials and documentation assist developers in implementing NLP solutions.
autolabel
Autolabel is a Python tool for labeling text datasets with versatile Large Language Models (LLMs), including GPT-4, providing high accuracy while reducing time and cost compared to manual labeling. It integrates with multiple LLM platforms and supports model benchmarking on Refuel's system. Designed for efficiency, Autolabel offers confidence estimation, caching, and state management, enabling precise calibration of labeling tasks. A simple three-step setup makes it accessible for NLP tasks like sentiment analysis, classification, and named entity recognition.
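Confidence-thresholded labeling, accepting LLM labels only when the model is sure and routing the rest to review, can be sketched like this. `toy_classifier` is a hypothetical stand-in for an LLM labeler, not Autolabel's API:

```python
def label_with_confidence(texts, classify, threshold=0.8):
    """Keep labels only when confidence clears the threshold,
    routing low-confidence items to human review."""
    accepted, needs_review = [], []
    for text in texts:
        label, confidence = classify(text)
        (accepted if confidence >= threshold else needs_review).append(
            (text, label, confidence)
        )
    return accepted, needs_review

def toy_classifier(text):
    """Hypothetical labeler standing in for an LLM call."""
    return ("positive", 0.95) if "great" in text else ("negative", 0.55)

accepted, needs_review = label_with_confidence(
    ["great product", "unclear review"], toy_classifier
)
print(len(accepted), len(needs_review))  # 1 1
```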
Awesome-Chinese-LLM
Explore a diverse collection of over 100 Chinese language models, applications, datasets, and tutorials. This project showcases notable models such as ChatGLM, LLaMA, and Qwen, offering resources for various technical requirements. It serves as a collaborative platform for sharing open-source models and applications, fostering a broad resource hub in the evolving field of Chinese language models from development to deployment and learning materials.
RAG-Retrieval
RAG-Retrieval provides a consistent framework for fine-tuning and inference across a range of RAG retrieval models, allowing seamless integration of open-source embeddings and rerankers. It simplifies model handling from LLM to BERT-based systems and offers an easy-to-use, extensible interface ideal for enhancing RAG applications, particularly for managing long documents.
ipex-llm
Explore a library designed for accelerating LLMs on Intel CPUs, GPUs, and NPUs. Seamlessly integrating with frameworks such as transformers and vLLM, it optimizes over 70 models for better performance. Latest updates feature GraphRAG support on GPUs and comprehensive multimodal capabilities like Stable Diffusion. With low-bit optimizations, it enhances processing efficiency on Intel hardware for large models. Discover new LLM finetuning and pipeline parallel inference advancements with ipex-llm.
llm
With the `llm` project now archived, explore modern Rust libraries for LLM inference instead: Ratchet for web ML, Candle for versatile models, and wrappers like drama_llama or API aggregators for seamless integration in the Rust ecosystem.
plotai
Leverage PlotAI to generate Python plots in Matplotlib using large language models (LLMs). This tool simplifies the plotting process by generating Python code from DataFrame inputs and prompts, suitable for use in Jupyter and Colab notebooks or standalone scripts. PlotAI supports OpenAI models such as 'gpt-3.5-turbo' and 'gpt-4', offering a quick setup via 'pip install plotai'. Although experimental, it represents a significant advancement in automating data visualization, with precautions advised regarding privacy and data safety.
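The underlying pattern, embedding the data's column names and sample values into a prompt that asks for matplotlib code, can be sketched without pandas. The prompt wording and `columns` structure are illustrative assumptions, not PlotAI's internals:

```python
def build_plot_prompt(columns: dict, request: str) -> str:
    """Describe the DataFrame's columns and sample values, then ask
    the LLM for matplotlib code fulfilling the request."""
    schema = "\n".join(
        f"- {name}: e.g. {values[:3]}" for name, values in columns.items()
    )
    return (
        "You are given a DataFrame with these columns:\n"
        f"{schema}\n"
        f"Write matplotlib code to: {request}\n"
        "Return only Python code."
    )

plot_prompt = build_plot_prompt(
    {"year": [2020, 2021, 2022], "sales": [10, 14, 9]},
    "plot sales by year",
)
print(plot_prompt)
```

The privacy caveat in the description follows directly from this pattern: sample values from your data are sent to the model provider as part of the prompt.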
FlashRank
FlashRank is a lightweight and fast Python library that enhances search pipeline efficiency with state-of-the-art re-ranking features. It supports pairwise and listwise re-rankers using LLMs and cross-encoders, optimizing ranking precision with compact models starting at 4MB. It's designed to work in serverless environments, providing competitive, resource-efficient performance across various languages.
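Re-ranking means scoring each retrieved passage against the query and re-sorting by that score. A toy sketch using term overlap as the scorer; FlashRank itself wraps trained cross-encoder models, not this heuristic:

```python
def rerank(query: str, passages: list[str]) -> list[tuple[str, float]]:
    """Score passages by query-term overlap and sort descending --
    a toy stand-in for learned cross-encoder scoring."""
    terms = set(query.lower().split())
    scored = [
        (p, len(terms & set(p.lower().split())) / len(terms))
        for p in passages
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranked = rerank(
    "fast python reranking",
    ["FlashRank does fast reranking in Python", "Unrelated text"],
)
print(ranked[0][0])
```

The design point is that re-ranking runs only on the handful of candidates a first-stage retriever returns, so even a heavier scorer adds little latency to the pipeline.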
llm-paper-daily
Stay informed on the latest in LLM research with our platform's daily updates. Access recent papers with arXiv links, GitHub repositories, and concise summaries. Navigate through categories including reasoning, agents, retrieval, and more for easy discovery. Engage in a forthcoming community discussion group to exchange insights and learn about large model implementations. Stay at the forefront of LLM advancements.
learning-llms-and-genai-for-dev-sec-ops
This resource provides structured insights on Large Language Models (LLMs) and Generative AI (GenAI) for development, security, and operations fields. Covering topics like OpenAI integration, debugging, prompt templates, and security strategies, it uses the Langchain framework to offer practical examples and exercises. Developed during a GenAI hackathon and enhanced through community collaboration, this repository supports continual learning and practical application across technical domains.
Feedback Email: [email protected]