# Docker
chat-with-gpt
Chat with GPT offers a flexible chat interface, leveraging the ChatGPT API and ElevenLabs, enabling fast responses, realistic text-to-speech, and speech recognition. Users can modify the AI's system prompt, set creativity levels, and benefit from complete Markdown support. Features include easy session sharing and editing, with pay-as-you-go API pricing. OpenAI and ElevenLabs API keys are required to start, and all keys are stored securely on the user's device.
text-generation-inference
Text Generation Inference facilitates the efficient deployment of Large Language Models like Llama and GPT-NeoX. It enhances performance with features such as Tensor Parallelism and token streaming, supporting hardware from Nvidia to Google TPU. Key optimizations include Flash Attention and quantization. It also supports customization options and distributed tracing for robust production use.
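Once a TGI container is running, generation is a single HTTP call. Below is a minimal sketch, assuming the container's port has been published on localhost:8080 and using the `/generate` route; the prompt and parameters are illustrative only.

```python
import requests

# Minimal sketch: query a Text Generation Inference container assumed to be
# published on localhost:8080 (adjust to your `docker run -p` mapping).
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Docker?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```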
pandas-ai
PandasAI allows effortless data interaction through natural language, serving both technical and non-technical audiences. This Python-based tool streamlines data analysis through intuitive queries and visualization. It integrates seamlessly with Jupyter notebooks, Streamlit apps, and REST APIs via FastAPI or Flask. Docker support ensures a straightforward client-server setup, with options for managed cloud or self-hosted deployments. Using advanced language models, PandasAI translates complex queries into actionable insights while maintaining privacy, and its extensive documentation emphasizes security and ease of use.
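As a rough illustration of the natural-language workflow, here is a minimal sketch assuming the 2.x `SmartDataframe` interface (the library's API has evolved across releases) and an OpenAI key; the data and question are made up for the example.

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI  # 2.x-style LLM wrapper; adjust for your version

# Toy sales data to query in plain English.
df = pd.DataFrame({
    "country": ["US", "UK", "FR"],
    "revenue": [5000, 3200, 2100],
})

llm = OpenAI(api_token="sk-...")  # placeholder API key
sdf = SmartDataframe(df, config={"llm": llm})

# The model translates the question into pandas operations and returns the answer.
print(sdf.chat("Which country has the highest revenue?"))
```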
bingo
Bingo provides a seamless interface that recreates New Bing's core features and remains broadly compatible and deployable within mainland China's domestic networks. It facilitates continuous conversations, multi-platform access, and straightforward Docker deployment. With support for voice interaction and OpenAI integration, it operates without user accounts, offering an effortless, cost-free experience. The platform allows independent hosting and includes enhancements like image generation and dark mode, with internationalization planned to extend its global reach and versatility.
copilot
OpenCopilot provides SaaS applications with a customizable AI assistant that integrates smoothly with existing APIs. It uses advanced language models to decide if API endpoints are needed for user requests, choosing the appropriate endpoints for optimal task execution. It supports bulk API imports via Swagger OpenAPI 3.0 and offers a simple chat integration. OpenCopilot allows for easy implementation and management of client requests with minimal coding effort. Explore the expanding role of AI copilots with this versatile platform.
core
Cheshire Cat is a framework for building bespoke AI solutions on top of different language models, much as WordPress or Django serve web developers. It offers an API-first architecture for seamless conversational-layer integration, memory retention for contextual conversations, and plugin extensibility. With Docker support, an intuitive admin interface, and compatibility with major language model providers such as OpenAI and Google, it is a robust choice for AI development. Discover its capabilities and connect with a vibrant community for collaboration and growth.
QChatGPT
QChatGPT offers comprehensive resources including documentation and deployment guides, along with a robust plugin system for easy integration. It supports multiple Python versions and encourages community contribution through a structured plugin submission process, providing an adaptable chat interface for varied communication requirements.
cloudflare-ai-web
Explore a lightweight solution to establish a multi-modal AI platform using Cloudflare Workers AI. Benefit from serverless deployment options, with features like access passwords and local chat history storage. Compatible with AI models including ChatGPT, Gemini Pro, and Stable Diffusion, and supports Docker and Deno Deploy integrations. Gain insights on required environment variables and deployment setups, ideal for hassle-free AI technology adoption.
IoA
Internet of Agents is an open-source framework that enables diverse AI agents to collaborate efficiently, taking inspiration from the structure of the internet. It supports autonomous team formation, integration of heterogeneous agents, and asynchronous task execution, ensuring scalability and extensibility across various applications. By promoting adaptive conversation flow and multitasking capabilities, the platform improves collaboration between AI agents, allowing them to tackle complex tasks collectively.
ML-Bench
This framework evaluates large language models and machine learning agents using repository-level code, featuring ML-LLM-Bench and ML-Agent-Bench. Key functionalities include environment setup scripts, data preparation tools, model fine-tuning recipes, and API calling guides. It supports assessing open-source models, aiding in training and testing dataset preparation, and offering a Docker environment for streamlined operation.
openai-api-proxy
This project offers an easy way to deploy an OpenAI API proxy using Docker and cloud functions, featuring SSE streaming and text moderation for efficient content handling. It supports Node.js environments with simple domain/IP integration and an optional proxy key for added security. Ideal for developers integrating OpenAI without direct exposure, and complemented by detailed Tencent Cloud deployment documentation.
chatgpt-telegram
This project offers a CLI tool that enables interaction with ChatGPT through a Telegram bot, supporting multiple platforms. Users can download executables for their OS and configure them using a Telegram token, with detailed instructions provided for both manual and Docker setups. Authentication options include browser sign-in or manual cookie extraction, offering adaptability for different user environments. The tool's easy installation and multi-platform compatibility ensure usability, catering to a wide range of technical skills.
unstructured
The project provides open-source components for ingesting and processing unstructured data types including PDFs, HTML, and Word documents. Its modular functions and connectors enable efficient data processing that supports diverse platform requirements, featuring a serverless API and multiple installation options for user adaptability.
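The core entry point is the `partition` function, which auto-detects the file type and returns structured elements. A minimal sketch, assuming an `example.pdf` in the working directory:

```python
from unstructured.partition.auto import partition

# partition() auto-detects the file type (PDF, HTML, DOCX, ...) and returns
# a list of structured Element objects (Title, NarrativeText, Table, ...).
elements = partition(filename="example.pdf")

for element in elements[:5]:
    print(type(element).__name__, "->", str(element)[:80])
```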
LocalAI
LocalAI is a free, open-source REST API offering on-premise, drop-in compatibility with the OpenAI API specification. It runs effectively on consumer hardware without requiring GPUs and is suitable for running language models and generating images and audio. Developed by Ettore Di Giacinto, it supports quick deployment via scripts or Docker and offers a model gallery and community support on Discord. Keep informed about its advancements in decentralized inferencing and federated modes.
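Because the API is OpenAI-compatible, the standard Python client can simply be pointed at a LocalAI container. A minimal sketch, assuming the container listens on port 8080 and that the model name matches one you installed from the gallery (the name below is a placeholder):

```python
from openai import OpenAI

# LocalAI exposes OpenAI-compatible endpoints, so the standard client works
# once base_url points at the local container.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # placeholder: use a model you installed locally
    messages=[{"role": "user", "content": "Summarize what LocalAI does."}],
)
print(reply.choices[0].message.content)
```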
Flowise
Flowise provides a platform for creating custom LLM applications through an intuitive drag-and-drop interface. Compatible with NodeJS and Docker, it enables efficient development and deployment of language models. Intended for developers, it includes key features such as server APIs, UI components, and complete documentation for smooth operation. Supporting self-hosting or Flowise Cloud deployment, it works with multiple infrastructures like AWS, Azure, and Digital Ocean. Join the active community on Discord and contribute to this expanding project.
ollama
Ollama provides an easy-to-use framework for deploying and customizing large language models across macOS, Windows, and Linux. With a library featuring models like Llama and Phi, Ollama supports local execution and Docker deployment. Import models from GGUF, PyTorch, or Safetensors, and enjoy simple integration with prompt adjustments and model file configurations. The project also offers community-driven integrations and comprehensive documentation for APIs and command-line usage, making it perfect for developers who want to seamlessly harness the power of language models.
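A quick way to exercise a local Ollama instance is its REST API on port 11434. The sketch below assumes the `llama3` model has already been pulled; streaming is disabled so the response arrives as a single JSON object.

```python
import requests

# Ollama's local REST API listens on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumes `ollama pull llama3` has been run
        "prompt": "Explain containers in one sentence.",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```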
OpenDevin
OpenHands is a platform utilizing AI to mirror the tasks of human developers, including code modifications and command executions. It provides a simple Docker setup and supports various LLM providers, catering to both developers and researchers. As a community-driven initiative, OpenHands offers opportunities for code contribution, research participation, and feedback. Explore detailed documentation and engage with an active community focused on improving AI in software development.
AutoPR
AutoPR uses AI to facilitate codebase workflow automation through nested README summaries, issue tracking, and YAML-configured actions. It integrates with Git to offer features such as API call recording and PR summarization. Despite its inactive maintenance, AutoPR supports autonomous PR generation via simple configurations. Explore the documentation to understand how it organizes tasks, manages Git operations, and performs customized workflows.
one-api
One API offers straightforward access to numerous large AI models through a standard OpenAI API format. It supports leading models like OpenAI ChatGPT, Google PaLM2, and Baidu Wenxin, facilitating API requests management with load balancing and streaming capabilities. The platform provides flexible deployment options such as Docker and Cloudflare and allows extensive customization in user and token management. It includes features like channel and user group management, announcements, and automatic failure retries. As an open-source initiative, it ensures compliance with OpenAI terms and Chinese legal requirements.
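In practice, a client only swaps its base URL and key for the gateway's. A minimal sketch, assuming one-api runs on its default port 3000 and that the key is a token created in the one-api console; the model name is a placeholder routed to whichever channel you configured.

```python
from openai import OpenAI

# one-api fronts many upstream providers behind the OpenAI API format, so a
# standard client only needs the gateway URL and a one-api-issued token.
client = OpenAI(
    base_url="http://localhost:3000/v1",  # assumed one-api address
    api_key="sk-one-api-token",           # placeholder token from the console
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # routed to a configured channel by one-api
    messages=[{"role": "user", "content": "Hello via the gateway."}],
)
print(resp.choices[0].message.content)
```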
photonix
Explore a cutting-edge photo management tool designed for home servers, utilizing web technologies for seamless photo organization from any device. With features like object recognition and color analysis, it offers intuitive use and community collaboration opportunities through platforms like GitHub and Docker Hub.
TaskingAI
TaskingAI is a versatile platform that supports LLM-based agent development and deployment, featuring integration with a wide range of AI models through unified APIs. Its BaaS-inspired framework separates AI logic from product development, enabling a seamless shift from prototyping to scalable production with RESTful APIs. Key features include one-click deployment, high-performance asynchronous computing, and an easy-to-use UI console for efficient project management. By supporting both stateful and stateless operations, TaskingAI facilitates AI agent customization and multi-tenant application deployment while ensuring smooth integration of tools and models.
basaran
Basaran serves as an open-source alternative to the OpenAI text completion API, enabling streaming with Hugging Face Transformers models. Key features include multi-GPU support, diverse decoding strategies, and easy compatibility with OpenAI's client libraries, allowing seamless service replacement with current open-source models. It provides real-time updates and a user-friendly web playground for developers seeking adaptable AI text generation solutions.
chatgpt-your-files
This workshop teaches how to build a secure chat interface for document interaction using OpenAI's GPT models and retrieval-augmented generation (RAG). Features include interactive chat, third-party login, secure document management, and a dynamic REST API with row-level security. It provides detailed guidance for setting up a Supabase project, either locally or in the cloud, and for optimizing the frontend for file uploads, complemented by a comprehensive YouTube tutorial.
EvalAI
EvalAI is a platform designed to evaluate and compare ML and AI algorithms efficiently. It features a centralized leaderboard and customizable evaluation protocols, supporting remote evaluations for extensive AI challenges. The platform aids in research reproducibility with consistent datasets and evaluation metrics, utilizing map-reduce frameworks for fast evaluations. Notable features include command-line interface support, Docker-based environment assessments, and scalability with open-source technologies. EvalAI seeks to facilitate the organization, participation, and collaboration in AI challenges, supporting global AI benchmarking efforts.
ChainFury
ChainFury is an open-source engine empowering AI tools like Tune Chat and Tune Studio. It offers advanced capabilities such as Retrieval Augmented Generation (RAG), image generation, and secure data storage. The platform includes modular engine packages and a GUI server for easy deployment via Docker, allowing customization through environment variables. Extensive documentation and active community support on Discord assist in integration and development for production use.
Pytorch-UNet
This PyTorch-based U-Net implementation targets high-definition image segmentation, particularly challenges like Kaggle's Carvana Image Masking. Featuring Docker for straightforward deployment and mixed-precision optimization, the model achieves a Dice coefficient of 0.988423 on a large test set. The project supports diverse segmentation applications, such as medical and portrait segmentation, and offers seamless training and inference with Weights & Biases for live training progress. Pretrained models are available for quick use.
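For reference, the reported metric is the Dice coefficient, 2|A ∩ B| / (|A| + |B|) over predicted and ground-truth masks. Below is a generic PyTorch sketch of that metric, not the repository's own implementation:

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of shape (N, H, W)."""
    pred = (pred > 0.5).float()
    intersection = (pred * target).sum(dim=(1, 2))
    union = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return ((2 * intersection + eps) / (union + eps)).mean()

# Example: a mask compared with itself yields a perfect score of 1.0.
mask = torch.randint(0, 2, (4, 64, 64)).float()
print(dice_coefficient(mask, mask))  # tensor(1.)
```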
open-webui
Open WebUI provides an offline, extensible, and feature-rich WebUI supporting multiple LLM runners like Ollama and OpenAI-compatible APIs. Setup is streamlined via Docker or Kubernetes, enhancing API integration across models. The platform offers plugin support, responsive design, and mobile PWA capabilities, along with Markdown, LaTeX, and advanced communication features. Users can build models, integrate Python functions, perform web-related tasks, and utilize image generation, all while ensuring secure, multilingual access with regular updates.
manga-image-translator
The project provides an efficient solution for translating text in manga and images, mainly supporting Japanese with additional compatibility for Chinese, English, and Korean. Features include inpainting, text rendering, and colorization. It supports Python (>=3.8) and offers accelerated performance with an NVIDIA GPU and Docker. Flexible installation allows local or Docker setups, supporting batch, demo, web, and API modes. Designed to engage hobbyists, it encourages community contributions and provides a user-friendly interface with sample images and an online demo.
OpenDAN-Personal-AI-OS
OpenDAN offers a Personal AI Operating System integrating multiple AI modules for diverse applications such as digital assistants and smart device controllers. It supports a wide range of functionalities from data management to AI workflow creation, made easily accessible through Docker. Developed collaboratively, OpenDAN is continuously updated and enhanced, facilitating innovative integrations including AIGC. It prioritizes user control and privacy, ensuring a balanced AI-powered experience.
aurora
Aurora provides open-source access to GPT-3.5 with a user-friendly interface and no login required. Multiple deployment options, such as Docker and Vercel, allow customization and flexibility. Suitable for various needs, from standard to advanced configurations, it enables efficient interaction with GPT-3.5 models.
deepo
Deepo is an open-source project designed to simplify the creation and management of deep learning environments. It allows customization of Docker images through modular assembly and automatic dependency resolution, and is compatible with CUDA, cuDNN, TensorFlow, PyTorch, and others. While no longer maintained, Deepo supports GPU acceleration and offers pre-built images for Linux, Windows, and OS X, making it a useful resource for reducing configuration complexity in deep learning workflows.
AutoGPT-Next-Web
Easily deploy a customized AutoGPT-Next-Web application in minutes with Vercel, featuring enhanced local language support and a responsive interface similar to AgentGPT. With options for Docker and Azure OpenAI API integration, along with secure access controls, this platform is ideal for creating a personalized 'AutoGPT' site. Explore commercial version features through community engagement.
serving
TensorFlow Serving provides a stable and scalable platform for deploying machine learning models in production environments. It integrates effortlessly with TensorFlow while accommodating different model types and supporting simultaneous operation of multiple model versions. Notable features include gRPC and HTTP inference endpoints, seamless model version updates without client-side code alterations, low latency inference, and efficient GPU batch request handling. This makes it well-suited for environments seeking effective model lifecycle management and version control, enhancing machine learning infrastructures with adaptable and reliable functionalities.
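A served model is queried over the REST endpoint on port 8501 using the `/v1/models/<name>:predict` pattern. The sketch below uses a placeholder model name (`my_model`) and a dummy feature vector; the request body shape must match whatever signature your model exports.

```python
import requests

# TensorFlow Serving's REST API (default port 8501) accepts a JSON payload of
# "instances" and returns a matching list of "predictions".
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # dummy input; shape depends on the model
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",  # "my_model" is a placeholder
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["predictions"])
```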
LLocalSearch
LLocalSearch is a locally running, privacy-respecting search tool built on Large Language Model agents. It is designed to be cost-effective, working smoothly on low-end hardware, and includes comprehensive live logs for deeper insight. It features a user-friendly design with light and dark modes and follow-up question support, all without requiring API keys. The interface is straightforward and adaptable, inspired by Obsidian, with planned enhancements such as LLama3 support, user accounts, and persistent memory. Well suited to anyone looking for an unbiased, alternative search tool.
platform
Huly Platform is a flexible framework designed to facilitate the development of business applications, including CRM systems. It integrates seamlessly with tools like Chat and Project Management, supporting diverse development needs. Self-hosting options via Docker allow for efficient deployment on personal servers across amd64 and arm64 architectures. The platform ensures rapid setup with Microsoft's Rush, while unit and UI tests bolster reliability, making it a suitable choice for developers seeking to optimize their workflow.
docker-llama2-chat
Learn to efficiently deploy both official and Chinese LLaMA2 models with Docker for local use. This guide provides detailed instructions and scripts for setting up 7B and 13B models, suitable for GPU or CPU. Ideal for developers looking to test language models, it highlights the capabilities and advantages of using these models in different applications.
YiVal
YiVal provides an advanced solution for automating prompt and configuration optimization in GenAI applications. The tool simplifies prompt adjustment, improving performance and reducing latency without manual intervention. It uses data-driven insights to refine model parameters, addressing challenges such as prompt creation, fine-tuning complexity, scalability, and shifting data. YiVal helps applications achieve better results and cost savings; refer to the quickstart guide for seamless integration.
rapidpages
Rapidpages is a development tool that generates code from UI descriptions using React and Tailwind, enabling developers to create visually appealing web pages. This prompt-based platform supports local and cloud usage, offering flexibility. Future enhancements aim at supporting complex UI generation through process modularization. GitHub OAuth and OpenAI integration facilitate seamless login and extended functionalities.
openui
OpenUI allows developers to visualize and prototype interactive user interfaces in real-time. With support for conversion to multiple frameworks like HTML, React, and Svelte, and compatibility with models such as OpenAI and Groq, it offers versatile integration options. Experience the live demo or configure locally with Docker for efficient setup.
gpt4free
This project demonstrates the development of an API package with multi-provider requests, featuring key functionalities such as timeouts, load balancing, and flow control. It is designed for developers interested in effective integration of diverse API capabilities. The repository includes up-to-date information and detailed installation guides. Users can set up the API using Docker or Python, with support for local inference and web UI. Community feedback and contributions are encouraged to further improve functionality and compatibility.
refact
The repository provides a WebUI for fine-tuning and self-hosting open-source code models, enabling enhanced code completion and chat features within Refact plugins. It supports easy Docker-based server hosting, running multiple models on a single GPU, and integration with external GPT models through third-party keys. Notable features include model sharding, downloading and uploading of LoRAs, and compatibility with models like Refact/1.6B and the starcoder2 series. Comprehensive plugin support for VS Code and JetBrains allows seamless integration into development workflows, making it suitable for small teams or individual developers under the BSD-3-Clause license, with enterprise options available.
codel
Codel is a fully autonomous AI agent designed to execute complex tasks within a secure Docker environment. It features a built-in browser and editor, saves task history in PostgreSQL, and allows self-hosting. Codel integrates seamlessly with OpenAI and Ollama models for efficient task automation, enhancing workflow management through its modern UI and comprehensive toolset.
DeepLearningProject
This tutorial provides a detailed guide on developing a machine learning pipeline with PyTorch. It involves creating custom datasets, exploring traditional algorithms, and transitioning to deep learning. Based on a Harvard graduate course project, it includes updated PyTorch code and clear setup instructions. Available in both HTML and IPython Notebook formats, it is designed for those aiming to expand their machine learning knowledge.
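To give a flavor of the custom-dataset step the tutorial covers, here is a generic PyTorch sketch (illustrative only, not the tutorial's actual code):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Minimal custom dataset wrapping in-memory feature/label tensors."""

    def __init__(self, features: torch.Tensor, labels: torch.Tensor):
        self.features = features
        self.labels = labels

    def __len__(self) -> int:
        return len(self.labels)

    def __getitem__(self, idx: int):
        return self.features[idx], self.labels[idx]

# 100 samples, 8 features each, binary labels.
ds = ToyDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))
loader = DataLoader(ds, batch_size=16, shuffle=True)

for x, y in loader:
    print(x.shape, y.shape)  # torch.Size([16, 8]) torch.Size([16])
    break
```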
OpenLLM
OpenLLM simplifies the deployment of open-source and custom LLMs as OpenAI-compatible APIs. Its features include a chat UI and robust inference capabilities, aiding in cloud deployment with Docker, Kubernetes, and BentoCloud. Supporting models such as Llama 3.2 and Qwen 2.5, it ensures easy integration and optimal local hosting, compatible with Hugging Face tokens for gated models.
ragapp
RAGapp enables seamless Agentic RAG deployment in enterprise cloud environments using Docker and LlamaIndex. It supports AI models from OpenAI and Gemini, offers intuitive interfaces like Admin and Chat UIs, and plans enhanced security with access token authorization. Deploy smoothly with Docker Compose or Kubernetes and ensure effective traffic management via API Gateway integration. Reach out for support at any time.
huginn
Huginn is open-source software for creating custom agents that automate online tasks, such as monitoring weather, tracking social media, and detecting deals. As a self-hosted alternative to IFTTT or Zapier, it prioritizes data privacy and control, allowing integration with various services and custom workflows through JavaScript or CoffeeScript. Suitable for flexible deployment on platforms like Docker, Heroku, or OpenShift, Huginn invites community support and contributions.
text-embeddings-inference
Text Embeddings Inference provides a high-performance framework for deploying text embeddings and sequence classification models. It includes optimized extraction for popular models like FlagEmbedding and GTE with efficient Docker support, eliminating model graph compilation and facilitating fast booting. This toolkit supports various models including Bert and CamemBERT, offering features like dynamic batching and distributed tracing, suitable for diverse deployment environments.
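Embedding a string is a single request to the `/embed` route. A minimal sketch, assuming the container's port is published on localhost:8080; the input text is arbitrary.

```python
import requests

# Text Embeddings Inference exposes a simple /embed route that returns one
# embedding vector per input string.
resp = requests.post(
    "http://localhost:8080/embed",
    json={"inputs": "Docker makes deployment reproducible."},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()[0]  # first (and only) embedding vector
print(len(embedding), embedding[:4])
```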
azure-openai-proxy
Azure OpenAI Proxy efficiently transforms OpenAI API requests into Azure-compatible formats. It supports GPT-4 and embeddings and allows seamless integration without additional cost. Ideal for managing model deployments with Docker or direct API calls, enhancing interoperability within the OpenAI project ecosystem.
dalle-playground
Discover the capabilities of text-to-image technology with this playground featuring Stable Diffusion V2. The interface has been updated for ease of use and replaces DALL-E Mini, offering powerful image generation. Ideal for tech enthusiasts, it integrates smoothly with Google Colab for quick setups and supports local development in environments such as Windows WSL2 and Docker Compose. Enjoy efficient creation of striking visuals with a straightforward setup process, catering to developers and creatives interested in advanced AI solutions.
chatgpt-plus
Discover an application built with Next.js and NestJS, featuring the official ChatGPT API for efficient integration. Key functionalities include a mobile-friendly responsive design, customizable themes, and language support in both English and Chinese. It offers a choice between 'ChatGPTAPI', a reliable paid service using OpenAI's official API, and 'ChatGPTUnofficialProxyAPI', which provides free access via a third-party proxy. Deployment through Docker and Vercel provides ease of installation and scalability. Suitable for developers interested in using ChatGPT's capabilities with added features and deployment flexibility.
Feedback Email: [email protected]