LLM_Web_search
This project enables local LLMs to perform web searches through a command-based approach: a regular expression detects a search command in the model's output, and the duckduckgo-search library fetches results for the extracted query. The retrieved content is ranked by combining a dense embedding model with a keyword retriever (Okapi BM25 or SPLADE), and the most relevant chunks are appended to the model's output so it can continue generating with the new context. Support for multiple search backends and keyword retrieval methods adds flexibility in how content is extracted, and the system also allows custom regular expressions and several chunking methods. For VRAM-limited setups, models such as Llama-3.1-8B-instruct are recommended.
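The sketch below illustrates the general flow described above in Python. It is a minimal, assumption-laden example, not the project's actual implementation: the `Search_web("...")` command format, the `all-MiniLM-L6-v2` embedding model, whitespace tokenization, and the rank-fusion step are all placeholders chosen for illustration, while the overall pattern (regex detection of a search command, duckduckgo-search, hybrid BM25 + embedding ranking) follows the description above.

```python
import re

from duckduckgo_search import DDGS
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

# Hypothetical command format; the real extension's regex is configurable.
SEARCH_COMMAND = re.compile(r'Search_web\("(.+?)"\)')

# Placeholder dense embedding model for this sketch.
embedder = SentenceTransformer("all-MiniLM-L6-v2")


def maybe_search(model_output: str, num_results: int = 5, top_k: int = 3):
    """If the model emitted a search command, run the search and return ranked snippets."""
    match = SEARCH_COMMAND.search(model_output)
    if match is None:
        return None
    query = match.group(1)

    # Fetch result snippets with duckduckgo-search.
    with DDGS() as ddgs:
        hits = ddgs.text(query, max_results=num_results)
    chunks = [f"{h['title']}: {h['body']}" for h in hits]
    if not chunks:
        return None

    # Keyword scores via Okapi BM25 over whitespace-tokenized chunks.
    bm25 = BM25Okapi([c.lower().split() for c in chunks])
    keyword_scores = bm25.get_scores(query.lower().split())

    # Dense scores via cosine similarity of sentence embeddings.
    chunk_emb = embedder.encode(chunks, convert_to_tensor=True)
    query_emb = embedder.encode(query, convert_to_tensor=True)
    dense_scores = util.cos_sim(query_emb, chunk_emb)[0].tolist()

    # Naive hybrid ranking: sum of rank positions from both retrievers
    # (the real project may weight or fuse scores differently).
    def ranks(scores):
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        return {idx: rank for rank, idx in enumerate(order)}

    kw_rank, dn_rank = ranks(keyword_scores), ranks(dense_scores)
    best = sorted(range(len(chunks)), key=lambda i: kw_rank[i] + dn_rank[i])[:top_k]
    return "\n".join(chunks[i] for i in best)
```

In the actual extension, the text returned by such a function would be appended to the model's output so that generation can resume with the retrieved information in context.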