# Python API
## ml-agents
The Unity ML-Agents Toolkit is an open-source project for training intelligent agents in game and simulation environments. Using algorithms such as reinforcement learning and imitation learning, it serves both game developers and researchers through a flexible Python API. The toolkit supports single- and multi-agent training scenarios, and trained models run cross-platform inside Unity via Sentis. Features include curriculum learning, custom training algorithms, and Gym and PettingZoo interfaces. It helps game developers build better NPC behaviors and helps researchers evaluate game design and automate game testing.
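As a rough sketch of the low-level Python API, the snippet below drives a Unity build with random actions through `mlagents_envs`; the build path is a placeholder, and a recent `mlagents_envs` release with a continuous action space is assumed:

```python
# Minimal loop over the low-level ML-Agents Python API.
# Assumes a recent mlagents_envs release and a behavior with continuous actions.
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

env = UnityEnvironment(file_name="path/to/UnityBuild")  # placeholder build path
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # One random continuous action per agent that requested a decision.
    actions = ActionTuple(
        continuous=np.random.randn(
            len(decision_steps), spec.action_spec.continuous_size
        ).astype(np.float32)
    )
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```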
## GPT-Telegramus
GPT-Telegramus v5 enables seamless interaction with AI platforms such as ChatGPT, Microsoft Copilot, Gemini, and Groq, supporting multilingual communication and media handling. The bot offers features like admin controls and data logging, providing users with a flexible tool for accessing AI capabilities in a streamlined manner.
## langchain-kr
Explore an insightful Korean tutorial offering free e-books, YouTube guides, and blog resources aimed at enhancing LangChain usage. Learn practical applications like local LLM hosting, task automation, and AI model implementation. Enjoy frequent updates and diverse tutorials specifically designed for Korean users, providing comprehensive insights into LangChain's functionalities. Suitable for developers and enthusiasts looking to expand their expertise in LangChain's capabilities.
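As a flavor of the material covered, here is a minimal LangChain Expression Language chain; the model name is an assumption and `OPENAI_API_KEY` is expected in the environment (the tutorial itself also covers local LLM alternatives):

```python
# Minimal LCEL chain: prompt -> chat model -> string output parser.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
model = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
chain = prompt | model | StrOutputParser()

print(chain.invoke({"text": "LangChain composes prompts, models, and output parsers."}))
```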
## openlogprobs
Openlogprobs is a Python API that uses targeted algorithms to recover log-probabilities from language model APIs, where they are often hidden for security or payload-size reasons. By combining logit bias adjustments with techniques such as binary search, it efficiently extracts full probability vectors from APIs such as OpenAI's. It offers 'topk', 'exact', and binary-search extraction methods depending on what the API exposes, making it useful for researchers and developers who want to inspect model outputs. It also underpins the academic paper 'Language Model Inversion'.
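To make the core trick concrete, here is a self-contained illustration of the logit-bias plus binary-search idea against a mock argmax-only API; `query_argmax` and the toy vocabulary are hypothetical stand-ins, not the library's actual interface:

```python
# Illustration of recovering hidden log-probabilities via logit bias + binary search.
# query_argmax is a mock of an API that only reveals the most likely token.
import math
import numpy as np

rng = np.random.default_rng(0)
true_logits = rng.normal(size=8)          # hidden distribution we pretend not to know
TOP = int(np.argmax(true_logits))

def query_argmax(logit_bias):
    """Mock API call: returns the argmax token after applying a logit bias."""
    biased = true_logits.copy()
    for tok, bias in logit_bias.items():
        biased[tok] += bias
    return int(np.argmax(biased))

def logit_gap(token, lo=0.0, hi=50.0, iters=40):
    """Binary-search the smallest bias that makes `token` the argmax.
    That threshold equals logit(top) - logit(token)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if query_argmax({token: mid}) == token:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Logits relative to the top token, then softmax-normalize into log-probabilities.
gaps = np.array([0.0 if t == TOP else logit_gap(t) for t in range(len(true_logits))])
rel_logits = -gaps
logprobs = rel_logits - math.log(np.exp(rel_logits).sum())
true_logprobs = true_logits - math.log(np.exp(true_logits).sum())
print(np.allclose(logprobs, true_logprobs, atol=1e-6))  # True
```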
## component-template
Explore a range of templates and examples for creating Streamlit Components with flexible frontends using different web technologies and a Python API. These components are designed for seamless integration into Streamlit apps and can be made available through PyPI. The repository provides comprehensive instructions on setting up a Python environment, creating the component frontend, and running associated Streamlit apps. It also showcases examples for non-React templates and collaboration with third-party tools, supporting contributions from the community.
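A minimal sketch of how a component is declared and rendered from Python; the component name and dev-server URL are placeholders for a frontend built from one of the templates:

```python
# Declare a custom component and render it inside a Streamlit app (run with: streamlit run app.py).
import streamlit as st
import streamlit.components.v1 as components

# During development, point at the frontend dev server; for release, pass path= to built assets.
my_component = components.declare_component(
    "my_component",                 # placeholder component name
    url="http://localhost:3001",    # placeholder dev-server URL
)

# Keyword arguments are forwarded to the frontend; the call returns the component's value.
clicks = my_component(name="Streamlit", default=0)
st.write("Frontend reported:", clicks)
```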
## aiyprojects-raspbian
Access a detailed Python API for AIY Vision and Voice Kits offering straightforward guides on assembly, updates, and troubleshooting. Discover extensive resources including tutorials, example code, and support forums, ideal for enhancing Raspberry Pi projects with AI-driven functionalities.
## COMET
Discover COMET's capabilities in improving translation precision via context use and error identification. Recent updates introduce XCOMET models for refined error classification by MQM typology, and DocCOMET for document-level context evaluations. This system supports discourse phenomena and chat translation quality assessments. Learn about the models and their roles in enhancing machine translation evaluations.
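A minimal scoring sketch, assuming the `unbabel-comet` package is installed; the checkpoint name follows the project's documentation and the sentences are illustrative:

```python
# Score a machine translation against a source and reference with a COMET checkpoint.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [{
    "src": "Der Hund lief in den Park.",
    "mt": "The dog ran into the park.",
    "ref": "The dog ran to the park.",
}]
output = model.predict(data, batch_size=8, gpus=0)  # gpus=0 -> CPU
print(output.system_score)   # corpus-level score
print(output.scores)         # per-segment scores
```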
## lighteval
Lighteval facilitates evaluation of large language models (LLMs) with a versatile toolkit supporting backends such as transformers, tgi, vllm, and nanotron. It enables detailed exploration of model performance and customization of tasks and metrics, and results can be stored on the Hugging Face Hub, in S3, or locally for experimentation and benchmarking. With an easy-to-integrate Python API, it suits a wide range of developers. Evolved from the EleutherAI LM Evaluation Harness and inspired by the HELM framework, it emphasizes speed, completeness, and multi-backend compatibility.
## manga-ocr
Manga OCR offers specialized recognition for Japanese manga, handling complex text scenarios such as vertical and horizontal text and low-quality images in one pass. It integrates smoothly with tools like ShareX for efficient text extraction, perfect for learners and enthusiasts alike.
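Basic usage is a few lines; the image path below is a placeholder:

```python
# Run Manga OCR on a single page image.
from manga_ocr import MangaOcr

mocr = MangaOcr()                        # downloads the recognition model on first use
text = mocr("path/to/manga_page.png")    # also accepts a PIL.Image
print(text)
```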
## fastllm
Fastllm is a pure C++ library with no third-party dependencies, delivering high-performance inference on ARM, x86, and NVIDIA GPU platforms. It supports quantization of Hugging Face models and an OpenAI-style API server, and handles multi-GPU and CPU deployments with dynamic batching. A front-end/back-end separation improves device compatibility, and it integrates with models such as ChatGLM and LLaMA. Python bindings also allow custom model structures, with extensive documentation for straightforward setup and use.
## mistral.rs
Mistral.rs facilitates fast LLM inference with advanced quantization and optimized processing for various devices, including Apple silicon, CPUs, and CUDA. It allows integration through Rust and Python APIs and supports an OpenAI-compatible server for multi-platform deployment. Features include efficient MoE models, direct quantization, and sampling techniques to improve machine learning workflows. Access prequantized models and employ prompt chunking and adaptable LoRA adapters for enhanced efficiency.
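Because the server speaks the OpenAI protocol, a standard `openai` client can talk to it; the port and model id below are assumptions and should match whatever the server was launched with:

```python
# Query a locally running mistral.rs OpenAI-compatible server with the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")  # placeholder port

resp = client.chat.completions.create(
    model="local-model",  # placeholder model id
    messages=[{"role": "user", "content": "Explain in one sentence what quantization does."}],
)
print(resp.choices[0].message.content)
```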
## nlg-eval
This repository offers code to evaluate unsupervised metrics in Natural Language Generation. It processes hypothesis and reference inputs to compute various metrics like BLEU, ROUGE, and CIDEr. Java and Python dependencies are needed for installation, and it can be used via command line or Python API. It supports assessments of both individual and multiple examples, providing flexibility for research applications. Users can customize settings for metrics like CIDEr to enhance precision.
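A sketch of scoring a single hypothesis through the Python API, assuming the package and its Java dependencies are installed; metric keys such as `Bleu_4` follow the project's output format:

```python
# Score one hypothesis against multiple references with nlg-eval.
from nlgeval import NLGEval

nlgeval = NLGEval()  # loads metric resources once; can be slow on first run
references = ["The cat sat on the mat.", "A cat was sitting on the mat."]
hypothesis = "The cat is sitting on the mat."

scores = nlgeval.compute_individual_metrics(ref=references, hyp=hypothesis)
print(scores["Bleu_4"], scores["ROUGE_L"], scores["CIDEr"])
```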
## elevenlabs-python
The ElevenLabs Python library delivers comprehensive text-to-speech capabilities for developers and content creators, offering vivid, realistic voices across numerous languages and accents. Featuring models such as Eleven Multilingual v2 and Eleven Turbo v2.5, it balances language coverage with speed and consistent quality. Installation and integration are straightforward: users can generate audio, clone voices, and adjust settings to meet various project needs, making it suitable for anyone looking for professional-quality audio tools.
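A short text-to-speech sketch; the method and parameter names follow a recent SDK release, and the API key and voice id are placeholders, so check the README of your installed version:

```python
# Generate and play speech with the ElevenLabs Python SDK (recent-version interface assumed).
from elevenlabs import play
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key="YOUR_API_KEY")          # placeholder key

audio = client.text_to_speech.convert(
    voice_id="YOUR_VOICE_ID",                        # placeholder voice id
    text="Hello from the ElevenLabs Python library.",
    model_id="eleven_multilingual_v2",
)
play(audio)
```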
## ViZDoom
ViZDoom enables AI development for playing Doom using visual inputs, serving as a crucial research tool in visual machine learning and deep reinforcement learning. Available for Linux, macOS, and Windows, it provides Python and C++ APIs. Key features include custom scenario creation, multi-platform support, and adjustable settings, supporting both asynchronous and synchronous multiplayer modes. Gymnasium environment wrappers enhance its utility in reinforcement learning studies, making it suitable for learning from demonstrations and other advanced learning techniques.
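A random-agent sketch on the bundled "basic" scenario, assuming a recent release that exposes `vzd.scenarios_path`:

```python
# Random agent on ViZDoom's bundled "basic" scenario.
import os
import random
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config(os.path.join(vzd.scenarios_path, "basic.cfg"))
game.init()

actions = [[True, False, False], [False, True, False], [False, False, True]]
for _ in range(3):                               # a few short episodes
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()                 # screen buffer, game variables, etc.
        reward = game.make_action(random.choice(actions))
    print("Episode reward:", game.get_total_reward())

game.close()
```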
## virtualhome
VirtualHome offers a Python API-driven platform to simulate household activities, focusing on complex agent interactions in dynamic, procedurally-generated environments. With features like realistic physics and multi-agent support, and updates enhancing graphics and time systems, it's ideal for embodied AI research. Explore detailed documentation to leverage its rich simulation functions.
## infinity
Infinity's AI-native database is engineered for optimal performance with large language model applications, providing fast and versatile search across multiple data types. It excels in low-latency and high QPS query performance, making it ideal for search, recommendation systems, and content generation. With an intuitive Python API and single-binary deployment, Infinity is easily accessible for AI developers. Explore embedded and client-server modes through documentation and community resources.
## wikipron
With over 3 million word/pronunciation pairs, WikiPron is an essential resource for linguists and developers. Available as both a command-line tool and a Python API, it extracts multilingual pronunciation data from Wiktionary. Users can specify language, dialect, and transcription level (phonemic or phonetic) for precise, customized data collection, and advanced options refine scraping to ease data management and research integration. The resulting pronunciation resources can enrich linguistic models and analyses.
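A small scraping sketch via the Python API; the configuration keywords follow the project's documented pattern and may differ between versions:

```python
# Scrape a handful of French word/pronunciation pairs from Wiktionary.
import itertools
import wikipron

config = wikipron.Config(key="fra")   # ISO 639 language key for French
for word, pron in itertools.islice(wikipron.scrape(config), 5):
    print(word, pron)
```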
## mlx
MLX is a machine learning framework optimized for Apple Silicon, featuring APIs aligned with NumPy and PyTorch. It includes composable function transformations, lazy computation, dynamic graph construction, and a unified memory model for efficient multi-device operations. Suitable for researchers exploring new ideas, MLX supports diverse examples such as text generation and image synthesis. Available on PyPI, it supports Python, C++, and more, offering a blend of user-friendliness and high efficiency in model training and deployment.
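A tiny sketch of MLX's NumPy-like arrays, lazy evaluation, and composable function transformations (runs on Apple silicon with the `mlx` package installed):

```python
# NumPy-like arrays, a gradient transformation, and explicit lazy evaluation in MLX.
import mlx.core as mx

def loss(w, x, y):
    return mx.mean((x @ w - y) ** 2)

x = mx.random.normal((32, 4))
y = mx.random.normal((32,))
w = mx.zeros((4,))

grad_fn = mx.grad(loss)      # transform: function -> gradient w.r.t. its first argument
g = grad_fn(w, x, y)         # builds the computation graph lazily
mx.eval(g)                   # forces evaluation
print(g)
```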
## tiny-tensorrt
tiny-tensorrt is a user-friendly NVIDIA TensorRT wrapper for deploying ONNX models from C++ and Python. Although no longer actively maintained, it emphasizes efficient deployment with minimal code. Dependencies include CUDA, cuDNN, and TensorRT, most easily set up through NVIDIA's Docker images. With support for multiple CUDA and TensorRT versions, it integrates smoothly into projects; documentation and installation guidance are available on its GitHub wiki.
## Tensorflow-bin
The project provides an optimized TensorFlow Lite binary for Raspberry Pi, built with XNNPACK support to boost on-device inference performance. It covers various Raspberry Pi models and OS versions, offering efficient machine learning capabilities with easy setup. Python API support is included, along with installation instructions for TensorFlow v1 and v2 across different Linux environments. Previous wheel versions remain accessible, and guidelines for compiling the TensorFlow C bindings are provided.
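Once a wheel from this project is installed, inference goes through the standard TensorFlow Lite interpreter; the model path below is a placeholder and the dummy input simply matches the model's declared shape:

```python
# Run a TFLite model with the interpreter bundled in the optimized TensorFlow wheel.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```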