#Large Language Models

Logo of knowledge
knowledge
Knowledge enables effective data management and interaction, utilizing cutting-edge Large Language Models for interactive learning. Its integrated Chat feature transforms data engagement, while the built-in Chromium browser facilitates easy content summarization and note creation. This tool serves as a versatile resource for both complex topic exploration and everyday browsing needs.
Logo of text-generation-inference
text-generation-inference
Text Generation Inference facilitates the efficient deployment of Large Language Models like Llama and GPT-NeoX. It enhances performance with features such as Tensor Parallelism and token streaming, supporting hardware from Nvidia to Google TPU. Key optimizations include Flash Attention and quantization. It also supports customization options and distributed tracing for robust production use.
Logo of LLamaSharp
LLamaSharp
LLamaSharp is a versatile library offering efficient inference of LLaMA and LLaVA models across platforms on local devices, leveraging CPU and GPU capabilities. Its high-level APIs and RAG support facilitate seamless integration of large language models. With a variety of backends such as CUDA and Vulkan, LLamaSharp eases deployment without requiring native library compilation. It integrates well with libraries like semantic-kernel, and its comprehensive documentation assists in developing AI solutions.
Logo of oss-fuzz-gen
oss-fuzz-gen
The framework uses large language models for generating and assessing fuzz targets in C/C++/Java/Python projects via OSS-Fuzz. Supporting models like Vertex AI and GPT, it emphasizes crucial metrics like compilability and runtime coverage. It reports 26 vulnerabilities from over 1300 open-source benchmarks, highlighting collaboration for enhanced testing in the open-source community.
Logo of NeumAI
NeumAI
Neum AI is a data platform designed to assist developers in utilizing their data more effectively by contextualizing Large Language Models through Retrieval Augmented Generation (RAG). It streamlines data extraction and processing from various sources into vector embeddings, enabling efficient similarity search. With features like high throughput architecture, real-time synchronization, and extensive data connectors, Neum AI offers scalable solutions for RAG without the complexity of integrating disparate services, catering to dynamic data management needs.
Logo of chatarena
chatarena
ChatArena offers a flexible, user-friendly library for developing and researching autonomous LLM agents through multi-agent language games. It supports Markov Decision Processes and features intuitive Web UI and CLI for easy interaction. Comprehensive documentation facilitates custom environment creation, making it a valuable tool for researchers and developers to test and train models.
Logo of DB-GPT-Hub
DB-GPT-Hub
This project utilizes Large Language Models (LLMs) to improve Text-to-SQL parsing efficiency and accuracy. It achieves notable accuracy improvements in complex database queries by employing Supervised Fine-Tuning (SFT) and using datasets like Spider. With support for various models, including CodeLlama and Baichuan2, it minimizes hardware demands through QLoRA. A valuable resource for developers, the initiative offers comprehensive instructions for data preprocessing, model training, prediction, and evaluation.
Logo of LLMAgentPapers
LLMAgentPapers
Explore a curated list of essential papers on advancements in Large Language Model (LLM) agents, detailing insights into interactive NLP, autonomous agents, and multimodal interactions. The collection includes comprehensive surveys on LLM planning, personality derivation, and memory capabilities, with applications in various domains. Regularly updated with significant works like KnowAgent for knowledge-augmented planning, this resource offers a balanced look at the developments and potential directions of AI-driven language models.
Logo of aws-genai-llm-chatbot
aws-genai-llm-chatbot
Utilize AWS CDK to deploy chatbots with a wide range of Large and Multimodal Language Models. This project supports experimenting with different models and configurations, facilitating easy integration with services such as Amazon Bedrock, Amazon SageMaker, and third-party providers like OpenAI and Anthropic. The framework offers a comprehensive solution for building chatbots suited for diverse applications, enhancing interaction capabilities across platforms. Explore secure messaging and AI-powered document processing through additional resources. An open-source library helps developers design AI solutions using pattern-based architecture.
Logo of chameleon-llm
chameleon-llm
Chameleon is a framework that enhances large language models, such as GPT-4, through a plug-and-play composition of various tools like vision models and web search engines. It shows improved accuracy on ScienceQA and TabMWP tasks, delivering consistent and rational tool use. Explore Chameleon's modular design for its effectiveness in scientific and math domains, ensuring objective comparison.
Logo of LMOps
LMOps
LMOps is a research initiative dedicated to developing AI solutions with foundation models, focusing on enhancing capabilities through Large Language Models (LLMs) and Generative AI. It explores areas such as automatic prompt optimization, extensible prompts, structured prompting, and LLM inference acceleration. The project addresses technologies like context demonstration selection and instruction tuning, improving the functionality and customizability of AI solutions across domains. By providing insights into in-context learning, LMOps significantly contributes to the advancement of AI technologies.
Logo of LLocalSearch
LLocalSearch
Consider a locally-operated system that respects privacy by running Large Language Models efficiently. This tool is designed to be cost-effective, working seamlessly with low-end hardware, and includes comprehensive live logs for deeper insights. It features a user-friendly design, adaptable light and dark modes, and follow-up question support, ensuring privacy without API keys. The interface is straightforward and adaptable, inspired by Obsidian, with upcoming enhancements such as LLama3 support, user accounts, and persistent memory features. Perfect for individuals searching for unbiased, alternative search tools.
Logo of obsidian-Smart2Brain
obsidian-Smart2Brain
Explore a unique open-source Obsidian plugin designed to streamline personal knowledge management by incorporating large language models like ChatGPT and Llama2. The tool enables direct interaction with notes, featuring capabilities such as note-based chatting, reference linking, and conversation archiving. Maintains data privacy and security by functioning offline. Offers a range of LLMs, facilitating local model integration with Ollama for enhanced flexibility and control. Delve into advanced features including various chat interfaces and note retrieval based on content similarity. Keep up-to-date with ongoing development and engage with evolving AI functionalities. Benefit from efficient note management through an AI-centric approach that balances simplicity with depth.
Logo of LLMDataHub
LLMDataHub
LLMDataHub provides a curated collection of datasets for training large language models, enabling advancements in chatbot dialogue, response generation, and language comprehension. This includes datasets across domain-specific, alignment, pretraining, and multimodal categories, with detailed metadata on size, language, and usage. Supporting open-source projects, it facilitates small entities and individuals in accessing necessary corpora for competitive model training. Contributors are welcome to enhance this growing dataset resource.
Logo of jailbreak_llms
jailbreak_llms
This study presents a comprehensive analysis of the largest in-the-wild jailbreak prompt collection from December 2022 to December 2023. Utilizing the JailbreakHub framework, it compiles data from platforms such as Reddit and Discord, focusing on the risks and behaviors linked to harmful language prompts. The dataset, containing over 15,000 prompts with 1,405 identified as jailbreak prompts, provides crucial insights into the vulnerabilities and possible safeguards of large language models.
Logo of llm-guard
llm-guard
LLM Guard is a security toolkit designed to protect large language model interactions by detecting harmful language, preventing data leaks, and resisting prompt injections. It offers seamless integration into production settings and provides regularly updated features. A variety of input and output scanners enhance its versatility in safeguarding LLM systems.
Logo of tree-of-thought-llm
tree-of-thought-llm
Explore 'Tree of Thoughts', an innovative method for problem-solving with large language models. Easily integrate using OpenAI's API, with instructions for both PyPI and source installation. Start by tackling tasks like the game of 24 using GPT-4 and conduct experiments across different domains. Contribute to the community by offering feedback and find detailed descriptions and methods in the accompanying paper for deeper insights.
Logo of serve
serve
TorchServe provides mandatory token authorization and defaults to disabling model API control, enhancing security for PyTorch models in production environments. It is designed for flexible deployment across multiple platforms, such as CPUs, GPUs, AWS, and Google Cloud. TorchServe supports complex workflow deployments and advanced model management with various optimizations, offering high-performance AI solutions.
Logo of IncarnaMind
IncarnaMind
IncarnaMind utilizes Large Language Models (LLMs) like GPT, Llama2, and Claude to facilitate interaction with personal documents such as PDFs and TXTs. It features a Sliding Window Chunking mechanism and Ensemble Retriever for enhanced querying. The platform supports multi-document conversational QA, optimizing retrieval from various formats. Recent updates include LLM quantization and model compatibility, addressing the limitations of fixed chunking and single-document queries for a stable interaction.
Logo of ai-notes
ai-notes
The repository compiles extensive notes on advanced AI, emphasizing generative technologies and large language models. Serving as foundational material for the LSpace newsletter, it covers topics like text generation, AI infrastructure, audio and code generation, and image synthesis. Featuring tools such as GPT-4, ChatGPT, and Stable Diffusion, the notes detail contemporary developments, aiding AI enthusiasts and professionals in keeping updated with AI innovation and application.
Logo of KwaiAgents
KwaiAgents
KwaiAgents, developed by Kuaishou Technology, is an open-source initiative featuring the KAgentSys-Lite system and KAgentLMs models. It provides tools for agent planning, reflection, and tool usage. Key datasets include KAgentInstruct, consisting of over 200k agent-related instructions, and KAgentBench, offering comprehensive evaluation data. This project supports the development and testing of AI agent systems.
Logo of lida
lida
Using grammar-agnostic large language models, the LIDA tool executes code to deliver accurate visualizations across different programming environments. It facilitates data summarization, goal-setting, visualization creation and editing, and infographic generation while maintaining security. Compatible with numerous LLM providers, it enhances data representation effectively.
Logo of CSGHub
CSGHub
CSGHub is an open-source platform for managing large language model (LLM) assets, offering efficient operations through unified management of datasets, spaces, and codes. With seamless integration into existing systems via standardized OpenAPIs, assets can be managed using a web interface, Git commands, or CSGHub SDK. The platform features an integrated asset management assistant, on-premises deployment options, and comprehensive security controls, catering to enterprise requirements. Additionally, it offers smart data processing pipelines, high availability, and disaster recovery solutions, positioning it as a private, user-friendly alternative to other solutions.
Logo of LLMSurvey
LLMSurvey
This article provides a detailed overview of resources related to Large Language Models, featuring organized academic papers, insights from a Chinese language book for beginners, and analysis of trends in research outputs post-ChatGPT launch. It includes evolutionary insights on GPT-series and LLaMA models, alongside practical prompt design resources and experiments on instruction tuning. Contributors to the research can access updates through the provided links, supporting collaborative progress in LLMs.
Logo of awesome-llm-json
awesome-llm-json
Discover resources for using Large Language Models (LLMs) in generating JSON and other structured outputs, featuring a comprehensive guide on hosted and local models, Python libraries, and tutorials. Understand the advantages of structured generation for enhanced performance and explore tools and models from OpenAI and Google, among others, for integrating LLMs with external systems. Benefit from guided generation techniques using LangChain, Pydantic for effective data extraction and output structuring.
Logo of lorax
lorax
LoRAX is a cost-effective framework for serving fine-tuned large language models efficiently on a single GPU, maintaining high throughput and low latency. It enables dynamic adapter loading and merging from various sources such as HuggingFace and Predibase, ensuring seamless concurrent processing. With support for heterogeneous batching, optimized inference, and ready-for-production tools like Docker images and Prometheus metrics, LoRAX is well-suited for diverse deployment scenarios. This platform supports models like Llama and Mistral and is free for commercial use under the Apache 2.0 License.
Logo of spacy-llm
spacy-llm
Explore a streamlined integration of Large Language Models (LLMs) with spaCy to achieve versatile and rapid NLP prototyping. This solution facilitates the transformation of unstructured data into reliable NLP outputs without the necessity of training data. Notable features include named entity recognition, text classification, and sentiment analysis through open-source LLMs and models from leading platforms like OpenAI, Google, and Microsoft. This integration allows leveraging the complementary strengths of LLMs and spaCy for effective language processing, maintaining reliability and accuracy.
Logo of graph-of-thoughts
graph-of-thoughts
Graph of Thoughts offers a flexible framework for tackling complex problems using large language models. Users can create custom Graphs of Operations for intuitive resolution or emulate conventional techniques like CoT and ToT. The framework seamlessly integrates with large language models to boost computational power. Comprehensive documentation and examples facilitate usage, catering to developers and general users alike. Installation is simple via PyPI or from the source for those interested in modification.
Logo of llm
llm
This Python library and CLI tool facilitates interaction with Large Language Models via remote APIs or local installations. Key features include command-line prompts, SQLite storage, and embedding generation, with support for model expansion through plugins. Comprehensive documentation ensures streamlined usage and integration.
Logo of ludwig
ludwig
Ludwig provides a low-code environment to create tailored AI models such as LLMs and neural networks with ease. The framework uses a declarative YAML-based configuration, supporting features like multi-task and multi-modality learning. Designed for scalable efficiency, it includes tools like automatic batch size selection and distributed training options like DDP and DeepSpeed. With hyperparameter tuning and model explainability, users have detailed control, along with a modular and extensible structure for different model architectures. Ready for production, Ludwig integrates Docker, supports Kubernetes with Ray, and offers model exports to Torchscript and Triton.
Logo of Awesome-LLM-Reasoning
Awesome-LLM-Reasoning
This resource provides a curated collection of research papers and tools aimed at examining and enhancing the reasoning abilities of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). It covers recent surveys, analysis, and methodologies regarding LLM reasoning, addressing aspects like mathematical reasoning and emergent phenomena. The repository includes significant studies on internal consistency, token bias, and multi-hop reasoning, offering code samples and theoretical insights. Researchers and developers can explore techniques such as Chain-of-Thought reasoning and assess how LLMs develop new research concepts, thus advancing AI's logical thinking frontiers.
Logo of code-review-gpt
code-review-gpt
Utilize Large Language Models to improve code quality in CI/CD pipelines by detecting issues like exposed secrets and inefficient code. Supports both local execution and pipeline integration for ongoing feedback. As an alpha release, it invites experimentation with careful review of AI suggestions. Engage with a vibrant developer community and contribute to a roadmap featuring new tools such as a PR discussion chatbot.
Logo of higgsfield
higgsfield
Higgsfield is an open-source framework designed for scalable machine learning, enabling efficient management of GPU resources for training models with large-scale parameters. It supports seamless training processes through ZeRO-3 DeepSpeed API and fully sharded data parallel API of PyTorch, with integration for continuous development via GitHub. Higgsfield addresses environmental and configuration challenges, providing developers with a flexible toolset for scalable experiments on cloud platforms such as Azure, LambdaLabs, and FluidStack, ensuring reliable performance.
Logo of agency
agency
Agency is a Go-based library designed for generative AI and LLMs, focusing on simplicity and efficient coding practices. This lightweight tool enables developers to create autonomous AI systems, offering features such as OpenAI API integration and a flexible framework for custom operations. Agency addresses the challenges associated with Python-to-Go translations, leveraging Go's capabilities in static typing and performance, thus providing a native solution for AI applications without unnecessary complexity.
Logo of opencompass
opencompass
OpenCompass is a comprehensive platform for assessing large language models, featuring advanced algorithms and a user-friendly interface. It supports 20+ HuggingFace and API models, evaluating over 70 datasets with about 400,000 questions. The platform is proficient in distributed evaluations, providing billion-scale assessments within hours, and supports various paradigms including zero-shot and few-shot learning. OpenCompass is modular and easily extendable, accommodating new models and datasets. It also allows for API and accelerated evaluations with different backends, contributing to a fair, open, and reproducible benchmarking ecosystem with its tools like CompassKit, CompassHub, and CompassRank.
Logo of llm_interview_note
llm_interview_note
Explore a curated collection of large language model concepts and interview questions, particularly suited for resource-constrained scenarios. Discover 'tiny-llm-zh', a compact Chinese language model, alongside projects including llama and RAG systems for practical AI learning. Engage with resources on deep learning, machine learning, and recommendation systems.
Logo of nlux
nlux
Discover an open-source library for efficient development of conversational AI interfaces using React and JavaScript. NLUX allows seamless AI integration through frameworks like Next.js and versatile React components, offering a zero-dependency core ideal for any web platform. It optimizes developer experience with focus on performance and accessibility, supporting tailored large language model interactions with tools such as ChatGPT and Hugging Face. Engage with the community, explore comprehensive documentation, and contribute to its development.
Logo of ml-engineering
ml-engineering
Discover a comprehensive collection of methodologies, tools, and step-by-step guides designed for training large language and multi-modal models, aimed at engineers and ML operators. This resource includes practical scripts and commands and draws from the author's experience with models such as BLOOM-176B and IDEFICS-80B, as well as ongoing work with RAG models at Contextual.AI. It covers crucial aspects from hardware configurations to debugging and orchestration, supporting efficient development and inference of advanced machine learning models. Follow updates and enhancements through social media.
Logo of semantic-kernel
semantic-kernel
Semantic Kernel seamlessly integrates LLMs like OpenAI with programming languages such as C#. It simplifies AI plugin orchestration, enabling efficient and automated user goal fulfillment. Enterprises benefit from robust security features and adaptability. Designed for future-proofing, it supports new AI models with minimal code changes and is available for Python, Java, and more.
Logo of NeMo
NeMo
NVIDIA's NeMo Framework facilitates the creation of large language models and speech recognition systems with modular design and ease of use. It offers updated support for Llama 3.1 LLMs and enhances AI training with compatibility for Amazon EKS and Google Kubernetes Engine, while delivering significant improvements in ASR model inference speeds and multilingual capabilities through the Canary model.
Logo of transferlearning
transferlearning
Delve into an extensive collection of transfer learning resources, featuring academic papers, tutorials, and code implementations. Stay informed about cutting-edge developments and foundational theories in domain adaptation, deep transfer learning, and multi-task learning. Access various educational materials, including video tutorials, profiles of eminent scholars, and prominent research papers, offering essential insights for both newcomers and experienced researchers.
Logo of aigc
aigc
This article explores the role of large language models in software development, illustrating their real-world applications. Key focus areas include prompt engineering, AI-based architecture, and comprehensive context management. Collaboratively created with Thoughtworks and community experts, these projects leverage tools like ChatGPT and Copilot to improve software development efficiency. Featuring open-source resources, tutorials, and a detailed eBook, this content provides practical insights for developers interested in LLM-driven process optimization and invites participation through community-driven contributions.
Logo of start-llms
start-llms
This guide provides essential resources to learn Large Language Models (LLMs) without needing an advanced background. Stay informed with the latest updates, techniques, and innovations in 2024 while accessing free resources like tutorials, courses, and community forums. Develop skills in areas such as Transformers and NLP through practical exercises and clear explanations. Suitable for all learning styles, the guide enables learners to become proficient in LLMs independently.
Logo of Awesome-Text2SQL
Awesome-Text2SQL
Access a wide-ranging collection of tutorials and resources dedicated to Large Language Models, Text2SQL, Text2API, and other related areas. The platform provides detailed insights into various community contributions, current leaderboard standings, and recent surveys about LLM-enhanced Text2SQL systems. This resource guide benefits both novice and experienced users aiming to deepen their knowledge and practical use of Text2SQL.
Logo of Awesome-LLM-RAG
Awesome-LLM-RAG
This repository is a curated collection of academic papers focusing on advanced Retrieval Augmented Generation in large language models. It includes state-of-the-art methods and topics such as RAG instruction tuning, in-context learning, and embeddings. Suitable for researchers and developers seeking resources and updates that advance generative language models. Engage with available workshops, tutorials, and contribute via pull requests to be part of the community.
Logo of rag-demystified
rag-demystified
Explore the workings of advanced RAG pipelines using LLMs for effective question-answering. Learn how frameworks such as LlamaIndex and Haystack enable transparency while streamlining operations. Address components like the Sub-question Query Engine and evaluate challenges in question sensitivity and cost. Gain insights into developing RAG pipelines that provide current information and reliable source tracking.
Logo of magentic
magentic
Easily incorporate Large Language Models into Python projects using the `magentic` library. Utilize decorators for structured outputs and prompt examples, and benefit from async function calls. Integrate complex logic with LLMs, supported by providers such as OpenAI and Anthropic. Explore features like streaming, vision integration, and retry mechanisms, all configured easily for adaptability. Leverage advanced annotations and operations for efficient LLM-powered functions.
Logo of Play-with-LLMs
Play-with-LLMs
Explore effective methods for training and evaluating large language models through practical examples involving RAG, Agent, and Chain applications. Understand the use of Mistral-8x7b and Llama3-8b models with techniques such as CoT and ReAct Agents, transformers, and adaptations for specific languages like Chinese. The article offers comprehensive insights into pretraining, fine-tuning, and RLHF processes, supported by practical case studies. Ideal for those interested in model quantization and deployment on platforms such as Huggingface.
Logo of EasyEdit
EasyEdit
EasyEdit is a comprehensive framework for refining large language models, enabling precise knowledge updates while preserving model integrity. It integrates advanced methods like AlphaEdit, DeepEdit, and InstructEdit, along with constrained decoding techniques to minimize hallucination. The platform supports editing tasks such as factual, safety, and personality modifications for targeted improvements. Utilizing cutting-edge solutions like EMMET and PMET, it facilitates efficient knowledge insertion, updating, and deletion. EasyEdit's robust evaluation metrics ensure effective and accurate model enhancements, vital for maintaining updated and proficient language models in diverse fields.
Logo of RSPapers
RSPapers
Explore a curated collection of crucial papers and tutorials on Recommender Systems, covering areas such as deep learning, social and industrial applications, privacy, and large language models. The repository updates regularly, offering insights into advanced research and addressing issues such as cold start and click-through rate prediction, making it a valuable resource for academics and industry professionals.