# FastAPI

llm-graph-builder
The application transforms unstructured data in various formats into a structured Neo4j knowledge graph. It uses large language models such as OpenAI and Gemini, via the LangChain framework, to extract nodes, relationships, and properties. Files can be uploaded from local devices, Google Cloud Storage (GCS), or Amazon S3, with the flexibility to select a preferred LLM model. Key features include custom schema support, interactive graph visualization in Bloom, and conversational querying of the data within Neo4j. It requires a Neo4j database (v5.15 or later) with APOC installed and can be deployed locally via Docker or on Google Cloud Platform.
langcorn
LangCorn facilitates the deployment of LangChain models via FastAPI, ensuring high performance and scalability for language applications. It allows for easy setup of custom pipelines and includes built-in authentication and asynchronous processing for enhanced response times. With detailed RESTful API documentation and adaptable FastAPI server settings, it serves as a robust solution for language model deployment.
langchain-serve
Langchain-serve allows LangChain applications to be deployed on Jina AI Cloud as scalable, serverless services while preserving the ease of local development. It supports creating REST/WebSocket APIs and integrates components such as AutoGPT, BabyAGI, and pandas-ai on either cloud or personal infrastructure, keeping data private. Deployment is reduced to one-command operations with FastAPI integration and a Swagger UI for API documentation, making it well suited to straightforward AI app deployment with minimal infrastructure overhead.
search-result-scraper-markdown
The project provides a web scraping tool that converts search results into Markdown using FastAPI, SearXNG, and Browserless. It features proxy support for anonymity and uses AI to filter search results precisely. Designed for developers, it simplifies converting HTML to Markdown and retrieving web and YouTube data while routing requests through proxies. Comparable tools such as Jina.ai offer similar web scraping and search functionality.
open-text-embeddings
This open-source project offers an OpenAI-compatible endpoint for text embeddings, supporting models such as BAAI/bge-large-en and intfloat/e5-large-v2. It allows flexible input handling and supports deployment on local and cloud platforms like AWS and Modal, with GPU optimization for enhanced performance. Ideal for developers, it provides robust text embeddings suitable for various applications.
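A minimal sketch of how such an OpenAI-compatible embeddings endpoint is typically queried, using the official openai Python client pointed at a local server; the base URL, port, and placeholder API key below are illustrative assumptions rather than values documented by open-text-embeddings.

```python
# Hedged example: query an OpenAI-compatible embeddings endpoint running locally.
# The URL, port, and API key are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local deployment address
    api_key="not-needed-for-local-use",    # many local servers ignore the key
)

response = client.embeddings.create(
    model="intfloat/e5-large-v2",          # one of the models mentioned above
    input=["FastAPI makes building APIs fast.", "Embeddings map text to vectors."],
)

for item in response.data:
    print(len(item.embedding))             # dimensionality of each embedding vector
```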
instagraph-nextjs-fastapi
The project combines modern frontend tools such as Next.js and Tailwind CSS with a FastAPI backend, aimed at developers building AI products efficiently. Server-sent events and React Flow streamline the development process. Installation covers everything from environment setup to running the local server and trying the web interface. Suitable for developers with a Python background who want to adopt these technologies for rapid deployment.
fastapi-tips
Explore practical FastAPI tips focused on improving application performance: installing uvloop and httptools to speed up the event loop and HTTP parsing, understanding how misused async functions can block the event loop, reusing HTTPX's AsyncClient for asynchronous outbound requests, and managing the application lifecycle with lifespan state. The guide also covers sound middleware design and strategies for debugging a blocked event loop. Together, these tips boost the efficiency and scalability of FastAPI applications.
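A brief sketch combining two of those tips, assuming a recent FastAPI/Starlette release with lifespan-state support: a single httpx.AsyncClient is created at startup, shared through the lifespan state, and awaited inside an async route handler so the event loop stays unblocked. The endpoint name and target URL are illustrative, not taken from the fastapi-tips guide.

```python
# Hedged sketch: reuse one httpx.AsyncClient via lifespan state (recent FastAPI/Starlette).
from contextlib import asynccontextmanager

import httpx
from fastapi import FastAPI, Request


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Create the client once at startup; it is closed automatically at shutdown.
    async with httpx.AsyncClient() as client:
        yield {"client": client}           # exposed to handlers as request.state.client


app = FastAPI(lifespan=lifespan)


@app.get("/proxy")                         # illustrative endpoint, not from the guide
async def proxy(request: Request):
    client: httpx.AsyncClient = request.state.client
    # Awaiting the outbound call keeps the event loop free for other requests.
    response = await client.get("https://example.com")
    return {"status": response.status_code}
```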
voltaML-fast-stable-diffusion
Experience the speed and efficiency of Stable Diffusion with easy Docker setup, supporting PyTorch and AITemplate across Windows and Linux. Engage with a clean WebUI, robust API, and a thriving open-source community, all under GPL v3 license. Join our collaborative space on Discord.
autollm
AutoLLM provides a unified API for building applications on more than 100 language models, with straightforward installation and automated cost calculation. It supports over 20 vector databases and offers one-line creation of both a RAG LLM engine and a FastAPI app. Resources including video tutorials, blog posts, and comprehensive documentation ease the transition from platforms like LlamaIndex, making it a strong choice for developers looking for efficient and scalable solutions.
infinity
Explore a REST API built for high-throughput, low-latency text embedding services. Models from Hugging Face can be deployed easily using fast inference backends such as Torch, Optimum, and CTranslate2. Infinity enables multi-modal orchestration, allowing diverse models to be deployed and managed side by side. Built with FastAPI and compliant with OpenAI's API specification, it is straightforward to adopt. It offers dynamic batching and GPU acceleration via NVIDIA CUDA, AMD ROCm, and other backends, and integrates with serverless platforms for scalability and performance.
create-tsi
Develop AI applications with this low-code toolkit built on LlamaIndex, using large language models hosted on Open Telekom Cloud. Tailor bots and agents to specific tasks with ease. It features a Next.js front end and a Python FastAPI backend for data-driven interaction, and can be used to build chat interfaces or connect to diverse data sources. Get started quickly with a T-Systems API key and speed up AI projects while adhering to sound coding and licensing practices.
drqa
Develop a robust question-answering system using LangChain and large language models such as OpenAI's GPT-3. The project includes a Python backend powered by FastAPI and a React frontend that turns PDFs into searchable text fragments, using sentence embeddings for fast, economical processing. It can integrate vector databases and adapts to multiple document formats. Planned improvements include streaming responses, caching, an enhanced UI, and support for further document types, making it a flexible framework for advanced question-answering applications.
Hello-Python
Discover Python through practical courses spanning foundational principles to advanced web development, featuring FastAPI and MongoDB. The series offers detailed video tutorials on both backend and frontend development, including ChatGPT integration. Ideal for beginners and intermediate learners, this resource helps build essential coding skills and apply them to practical projects. Join a thriving development community to further refine your expertise and unlock the full potential of Python.
swiss_army_llama
Swiss Army Llama simplifies local LLM processing through FastAPI, providing REST endpoints for text embeddings, completions, and semantic analysis. It accommodates various document types and audio inputs, integrates OCR and transcription via the Whisper model, and uses a Rust-based library for vector similarity alongside FAISS-backed search. Cached embeddings improve efficiency, RAM disks speed up model loading, and multiple pooling methods offer adaptability. Everything is accessible through Swagger UI for easy application integration.
fastapi
FastAPI is a modern Python web framework that enables high-performance API development, leveraging asynchronous coding for speed and error reduction. It offers an intuitive interface with editor support and auto-completion, streamlining development and minimizing debugging time. Fully compatible with OpenAPI and JSON Schema, FastAPI is equipped for production environments with automatic interactive documentation. Its use of standard Python type hints reduces code duplication and improves code robustness.
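A minimal example of what that looks like in practice: a Pydantic model and standard type hints drive request validation, serialization, and the interactive documentation served at /docs, with no extra wiring.

```python
# Minimal FastAPI app: type hints provide validation, serialization, and OpenAPI docs.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.post("/items/")
async def create_item(item: Item) -> Item:
    # The Item annotations give request validation, response serialization,
    # and a schema entry in the auto-generated docs at /docs.
    return item

# Run with: uvicorn main:app --reload
```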
financial-agent
The financial agent, built on LangChain and FastAPI, provides financial data such as current and historical prices and the latest news through the Polygon API. It supports calculations such as owner earnings, return on equity, and discounted cash flow valuation. Deployment options include a secure Docker container or a local Python setup with Poetry. Designed to provide financial insights for informational and entertainment purposes, it requires OpenAI and Polygon API keys and offers useful support for investment analysis.
AgentGPT
AgentGPT lets users configure and deploy custom autonomous AI agents in the browser; agents plan and execute tasks and learn in pursuit of a stated goal. Features include a CLI for setting up environment variables, databases, and more, built on technologies such as Next.js and FastAPI. Development is supported through GitHub sponsorships.
langchain-extract
LangChain Extract provides an effective solution for extracting text and file data using FastAPI, LangChain, and PostgreSQL. It features a FastAPI web server that supports the creation of customizable extractors through JSON Schema. With integration into LangChain, it enhances data processing capabilities, suitable for various data extraction scenarios. The REST API and OpenAPI Documentation facilitate ease of access, while the demo service and continuous development highlight its viability for creating custom applications.
agentkit
AgentKit, built on LangChain, offers a complete solution for developing scalable, chat-based agent apps. It simplifies full-stack development through a modular FastAPI/Next.js architecture and features such as data streaming and a reliable routing system. Designed for efficiency, it supports rapid prototyping and stable production deployments, emphasizing configurability and user feedback for tailored applications.
cookiecutter-fastapi
Utilize Cookiecutter CLI for easy generation of FastAPI project templates, eliminating the need to fork repositories. This tool employs Jinja2 for flexible customization of file names and content, providing a personalized setup process. Simply install with 'pip install cookiecutter' and run 'cookiecutter gh:arthurhenrique/cookiecutter-fastapi' to initiate your FastAPI projects. Enjoy an efficient and straightforward project setup experience.
semantic-search-app-template
This tutorial provides a template for building a semantic search application on the Atlas Embedding Database and FastAPI. It integrates optional tools such as the OpenAI Embedding API, supports uploading and indexing content for precise semantic search, and lets results be inspected with a visual debugger. The guide covers Docker deployment and API key integration, and invites contributions such as a React front end, making it a useful resource for developers building search features on modern embeddings.
rag-postgres-openai-python
Explore a web-based chat application that integrates OpenAI models with PostgreSQL, optimized for Azure. Built with React on the frontend and Python with FastAPI on the backend, the project offers advanced question answering and vector-based search via pgvector. It deploys easily to Azure Container Apps and Azure PostgreSQL Flexible Server, uses hybrid search, and employs OpenAI function calls for query optimization. Suitable for developers focused on efficient cloud resource use, better database queries, and streamlined deployment with the Azure Developer CLI.
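As a rough sketch of the vector-search piece, this is how a pgvector similarity query is commonly issued from Python with psycopg; the table name, column names, connection string, and embedding dimensionality are assumptions for illustration, not the repository's actual schema.

```python
# Hedged sketch of a pgvector similarity query; schema and connection string are illustrative.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

query_embedding = np.random.rand(1536).astype(np.float32)  # stand-in for a real embedding

with psycopg.connect("postgresql://user:pass@localhost/db") as conn:
    register_vector(conn)  # teach psycopg to send/receive pgvector values
    rows = conn.execute(
        "SELECT id, content FROM documents "
        "ORDER BY embedding <=> %s LIMIT 5",   # <=> is pgvector's cosine-distance operator
        (query_embedding,),
    ).fetchall()

for row in rows:
    print(row)
```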
quillman
Engage in uninterrupted voice interaction with an advanced speech-to-speech language model using bidirectional streaming technology. The app, powered by Kyutai Lab's Moshi model and Mimi encoder/decoder, ensures fast responsiveness and low latency through Opus audio codec compression. Discover possibilities for building language model applications and experimenting with provided tools. Explore interactive demos and contribute to the open-source community.
LitServe
LitServe builds on FastAPI, serving AI models at least twice as fast as a plain FastAPI setup through features such as batching, streaming, and GPU autoscaling. It supports diverse model types, including LLMs and models built with PyTorch, JAX, and TensorFlow, and offers both self-hosted and managed deployment options. Engineered for scalable, enterprise-grade performance, LitServe facilitates building compound AI systems and integrates with vLLM for comprehensive AI model management.
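A minimal sketch based on LitServe's documented LitAPI interface (details may differ across versions); the toy model, request payload shape, and port are placeholders rather than anything from a real deployment.

```python
# Hedged sketch of the LitServe serving loop; the "model" here is a trivial stand-in.
import litserve as ls


class EchoAPI(ls.LitAPI):
    def setup(self, device):
        # Load the model once per worker; a real API would load weights onto `device`.
        self.model = lambda text: text[::-1]

    def decode_request(self, request):
        return request["input"]               # assumed request payload shape

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        return {"output": output}


if __name__ == "__main__":
    server = ls.LitServer(EchoAPI(), accelerator="auto")
    server.run(port=8000)
```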
langserve
LangServe provides seamless deployment of runnables as REST APIs, integrated with FastAPI and Pydantic for precise data validation. It offers efficient endpoints for invoking, batching, and streaming with high concurrency. The platform includes auto-inferred schemas, detailed API documentation, and tracing to LangSmith. With JavaScript client support and a comprehensive SDK, LangServe facilitates robust API management and interactive testing, featuring various deployment examples to optimize development workflows.
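A minimal sketch of exposing a LangChain runnable through LangServe's add_routes; the prompt, model, and path are illustrative, and exact import paths can vary with LangChain versions.

```python
# Hedged sketch: serve a simple chain with LangServe (requires OPENAI_API_KEY at runtime).
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI(title="LangServe example")

chain = ChatPromptTemplate.from_template("Summarize: {text}") | ChatOpenAI()

# Adds /summarize/invoke, /summarize/batch, and /summarize/stream endpoints,
# along with auto-inferred input/output schemas.
add_routes(app, chain, path="/summarize")

# Run with: uvicorn main:app
```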
rag-search
RAG Search API by thinkany.ai offers an effective solution for sophisticated data search, leveraging OpenAI's GPT-3.5-turbo and text-embedding-ada-002 models with a Zilliz database. The setup involves setting environment variables, installing dependencies, and launching the FastAPI server. It supports complex query processing, including reranking and filtering features to refine search results according to adjustable parameters. This API is well-suited for developers seeking to enhance search efficiency.
LLMChat
Explore a comprehensive chat platform powered by FastAPI and Flutter, designed for seamless integration with ChatGPT and local LLM models. The system supports real-time communication, DuckDuckGo web browsing, and vector embeddings for richer contextual responses. It offers model customization and is prepared for future GPT-4 features, focusing on data efficiency, automatic summarization, and security. Users get a friendly interface, customizable widgets, and optimized concurrency.
smolex
Smolex enhances ChatGPT capabilities by efficiently retrieving code entities from a codebase, aiding in writing tests, updating code, and suggesting improvements. It leverages ASTs stored in a SQLite database for swift access and currently supports Python. With the potential for expansion to other languages, Smolex integrates seamlessly to improve coding interactions in ChatGPT.
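A hedged sketch of the general approach described, not smolex's actual code or schema: walk a Python file's AST and index its functions and classes in SQLite so they can be looked up quickly.

```python
# Illustrative AST-to-SQLite indexer; table layout and file paths are assumptions.
import ast
import sqlite3

conn = sqlite3.connect("code_index.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS entities (name TEXT, kind TEXT, path TEXT, lineno INTEGER)"
)


def index_file(path: str) -> None:
    # Parse the file and record every function and class it defines.
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "function"
            conn.execute(
                "INSERT INTO entities VALUES (?, ?, ?, ?)",
                (node.name, kind, path, node.lineno),
            )
    conn.commit()


index_file("example.py")  # hypothetical file to index
print(conn.execute("SELECT * FROM entities WHERE name = ?", ("main",)).fetchall())
```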
lego-ai-parser
This open-source application uses FastAPI and OpenAI to parse visible text from HTML elements accurately. It supports multiple languages and can be run as a server for integration. Built-in parsers handle data from Google Local, Amazon, Etsy, Wayfair, BestBuy, Costco, Macy's, and Nordstrom, providing reliable text extraction and classification, and custom parsers can be designed for specific data needs across online platforms. Adaptive token management and secure API calls improve reliability and efficiency.
tldrstory
tldrstory provides a semantic search platform using zero-shot labeling for dynamic content categorization and text similarity searches, equipped with a Streamlit interface and FastAPI backend for data analysis. Installable via pip or GitHub, it supports RSS, Reddit, and custom data sources for diverse application setups such as 'Sports News'. Ideal for managing large volumes of story text.
InsightFace-REST
This repository delivers a REST API for face detection and recognition powered by FastAPI and NVIDIA TensorRT. It supports fast deployment on NVIDIA GPU systems through Docker, optimizes inference speed, and allows model conversion to ONNX and TensorRT formats. Highlights include automatic model retrieval, compatibility with multiple detection models including SCRFD and RetinaFace, and recognition models such as ArcFace. Batch inference is supported for both detection and recognition, making it suitable for high-performance face analysis in a range of applications.
Fooocus-API
The platform provides a FastAPI-based REST interface for the open-source Fooocus image generation software. It simplifies image manipulation by removing the need for manual tweaking, drawing on techniques from Midjourney and Stable Diffusion. Comprehensive documentation and example code are available, allowing integration from multiple programming languages. The solution is designed for deployment via Docker or self-hosting, supporting extensive configuration and customization for image processing. Users can set it up with Conda, venv, or Docker and tune image enhancement through various parameters, with automated optimizations improving output quality.