# Retrieval-Augmented Generation
## Awesome-LLM-RAG
This repository is a curated collection of academic papers on advanced Retrieval-Augmented Generation in large language models. It covers state-of-the-art methods and topics such as RAG instruction tuning, in-context learning, and embeddings, and suits researchers and developers looking for resources that advance generative language models. Engage with the available workshops and tutorials, and contribute via pull requests to join the community.
## local-rag-example
Discover a collection of practical examples using local large language models in the Local Assistant Examples repository. Originating from a guide on implementing Retrieval-Augmented Generation (RAG) locally with Langchain, Ollama, and Streamlit, the repository has grown to include a wider range of educational material. Each example ships with its own README and is structured for clear understanding and implementation. The project is intended for educational use, offering simplified views of LLM applications without production intricacies. New examples are added over time.
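The pattern these examples teach, retrieve the most relevant passages and stuff them into the prompt, can be sketched without any framework. Below is a minimal, hypothetical bag-of-words retriever (every name is illustrative, not from the repository); in the actual examples, Langchain and Ollama handle retrieval and generation:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # "Augmentation" step: prepend the retrieved context to the question.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Ollama runs large language models locally.",
    "Streamlit builds simple web UIs in Python.",
    "RAG augments a prompt with retrieved context.",
]
print(build_prompt("How does RAG augment a prompt?", docs))
```

In a real pipeline, the bag-of-words vectors would be replaced by embeddings, and the returned prompt would be sent to a local model.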
## synthesizer
Discover a comprehensive framework for custom dataset creation and retrieval-augmented generation (RAG) using models from leading providers such as Anthropic and OpenAI. It integrates with the Agent Search API to set up workflows, create synthetic Q&A pairs, and assess RAG pipeline performance. Engage with the community on Discord, or delve into the documentation to refine LLM training and deployment.
## rag-demystified
Explore the inner workings of advanced RAG pipelines that use LLMs for question answering. Learn how frameworks such as LlamaIndex and Haystack streamline operations while remaining transparent. Examine components like the Sub-question Query Engine, and weigh challenges around question sensitivity and cost. Gain insight into building RAG pipelines that deliver current information with reliable source tracking.
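The sub-question pattern mentioned above decomposes a complex question into per-entity sub-questions, answers each with its own retrieval step, and synthesizes the results. A hypothetical, framework-free sketch of that control flow (the three functions stand in for what would be LLM and retrieval calls):

```python
def decompose(question: str, entities: list[str]) -> list[str]:
    # Stand-in for an LLM decomposition step: one sub-question per entity.
    return [f"What is the revenue of {e}?" for e in entities]

def answer_sub_question(sub_q: str, knowledge: dict[str, str]) -> str:
    # Stand-in for retrieval + generation over one entity's documents.
    for entity, fact in knowledge.items():
        if entity in sub_q:
            return fact
    return "unknown"

def synthesize(question: str, sub_answers: list[str]) -> str:
    # Stand-in for the final LLM call that combines sub-answers.
    return f"{question} -> " + "; ".join(sub_answers)

knowledge = {"Acme": "Acme's revenue is $10M.", "Globex": "Globex's revenue is $7M."}
question = "Which of Acme and Globex earns more?"
subs = decompose(question, ["Acme", "Globex"])
answers = [answer_sub_question(q, knowledge) for q in subs]
print(synthesize(question, answers))
```

Each sub-question triggers its own retrieval, which is where the cost concerns discussed in the repository come from: a single user question can fan out into several LLM calls.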
## serverless-chat-langchainjs
This project showcases a serverless AI chat experience built with LangChain.js and Azure's serverless technologies. Hosted on Azure Static Web Apps and Azure Functions, with Azure Cosmos DB for storage, it offers a scalable and cost-effective solution. It uses Retrieval-Augmented Generation to ground responses in enterprise documents. Deploy it on Azure or test locally with Ollama, then customize it to specific business needs to build informative chatbots.
## RAG-Survey
This survey explores Retrieval-Augmented Generation (RAG), covering diverse techniques like query and data augmentation, with ongoing updates reflecting the field's growth. It offers insights into advanced applications in content generation and knowledge base enhancement.
## R2R
Explore a versatile platform for building scalable Retrieval-Augmented Generation (RAG) applications, featuring multimodal ingestion and hybrid search capabilities. With a robust containerized RESTful API, comprehensive app management, and observability features, R2R simplifies the deployment of advanced RAG solutions. Key functionalities include automatic relationship extraction and knowledge graph construction, enhanced by the latest updates such as Hatchet orchestration and Unstructured.io for improved ingestion. Refer to the documentation for in-depth insights into streamlined RAG development.
## pykoi-rlhf-finetuned-transformers
Pykoi is an open-source Python library that facilitates the optimization of large language models utilizing Reinforcement Learning with Human Feedback (RLHF). It features a unified interface for collecting user feedback in real-time, finetuning, and comparing different models. Key functionalities include a UI for chat history storage, tools for efficient model performance comparison, and RAG chatbot integration. Compatible with CPU and GPU environments, Pykoi supports models from OpenAI, Amazon Bedrock, and Huggingface, aiding in fine-tuning models with custom datasets for improved precision and relevance.
## n-levels-of-rag
This guide provides insights into Retrieval-Augmented Generation (RAG) applications, covering basic to advanced techniques like file traversal and async processing. It is designed to aid both beginners and experienced developers in optimizing system performance and understanding core functionalities.
## RAG-Survey
Retrieval-Augmented Generation (RAG) is enhancing AI-generated content by integrating retrieval methods with generative AI. This comprehensive survey categorizes research on RAG's foundations and methods, including query-based, latent representation-based, logit-based, and speculative approaches. It highlights advancements in input, retrieval, and generation enhancements and covers applications like open-domain question answering and video captioning. Keep updated with this extensive review on how RAG influences AI content development.
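Of the paradigms the survey names, logit-based RAG is the least obvious: rather than editing the prompt, it interpolates the generator's next-token distribution with a distribution induced by retrieved neighbors, as in kNN-LM. A toy sketch of that interpolation with made-up distributions (all values here are illustrative):

```python
def interpolate(p_model: dict[str, float], p_knn: dict[str, float], lam: float) -> dict[str, float]:
    # Probability-space fusion: p = (1 - lam) * p_model + lam * p_knn.
    vocab = set(p_model) | set(p_knn)
    return {t: (1 - lam) * p_model.get(t, 0.0) + lam * p_knn.get(t, 0.0) for t in vocab}

# The generator alone prefers "paris"; retrieved neighbors strongly back "lyon".
p_model = {"paris": 0.6, "lyon": 0.3, "nice": 0.1}
p_knn = {"lyon": 0.9, "paris": 0.1}
fused = interpolate(p_model, p_knn, lam=0.5)
print(max(fused, key=fused.get))  # prints "lyon": retrieval shifts the prediction
```

Query-based approaches, by contrast, act before generation (context goes into the prompt), while latent-representation approaches fuse retrieved content inside the model's hidden states.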
## GoMate
This RAG framework offers a modular structure that ensures reliable inputs and trusted outputs for high-quality retrieval-augmented generation. It can be flexibly adjusted to diverse application needs, and recent updates add multi-file-type support, an improved DenseRetriever, and new modules such as ReRank and Judge. Installation is available via pip or from source, with accessible quick-start guides.
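Reranking, one of the modules listed above, re-scores a fast retriever's top-k candidates with a slower, more precise scorer before generation. A toy sketch with stand-in scorers (both scoring functions are illustrative; a real ReRank stage would use a trained cross-encoder):

```python
def first_stage(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Cheap recall stage: rank by raw token overlap with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    # Stand-in precision stage: overlap weighted by brevity,
    # so short, on-topic documents beat long ones with incidental matches.
    q = set(query.lower().split())
    score = lambda d: len(q & set(d.lower().split())) / len(d.split())
    return sorted(candidates, key=score, reverse=True)

query = "local rag pipeline"
docs = [
    "a very long document that mentions local rag pipeline among many many other unrelated words",
    "local rag pipeline",
    "rag pipeline basics",
]
candidates = first_stage(query, docs)
print(rerank(query, candidates)[0])  # the short, on-topic document wins after reranking
```

The two-stage split is the point: the first stage keeps recall cheap over the whole corpus, while the reranker spends its more expensive scoring only on the shortlist.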
Feedback Email: [email protected]