# C++

VideoPipe
VideoPipe is a C++ framework for video analysis across platforms, offering easy plugin integration and support for protocols like RTSP and RTMP. Suited for face recognition and traffic analysis, it provides flexible configuration with minimal dependencies and supports inference backends like OpenCV and TensorRT.
ai.deploy.box
AiDB provides a unified interface for deploying deep learning models in C++, compatible with frameworks like ONNXRuntime, MNN, NCNN, TNN, PaddleLite, and OpenVINO. Supporting Linux, macOS, and Android, it features demos in Python, Lua, and Go, and aims to simplify deployment with a user-friendly, flexible design. This toolbox is well suited to streamlining AI model deployment across diverse environments.
compile-time-regular-expressions
The library brings compile-time regular expressions to C++, with matching, searching, and capturing available at compile time or at runtime. It supports Unicode and UTF-8, targets C++17 and C++20, and implements most of the PCRE syntax, gaining performance by parsing patterns at compile time. Developers can integrate it via CMake or install it through vcpkg, and the project ships examples covering practical use cases such as data extraction and Unicode handling.
indicators
The indicators project delivers a single-header C++ library for thread-safe progress indicators, including various types of progress bars and spinners. It is ideal for enhancing the user interface experience during long processes. The library offers flexible usage options and is licensed under MIT.
flashlight
Flashlight is a versatile C++ library designed by top AI researchers for machine learning, providing significant internal modifiability and efficient performance. It features a core of under 10 MB, utilizing modern C++ and the ArrayFire library for optimal performance in areas like speech recognition, image classification, and more. It supports both CUDA and CPU backends, offers high configurability, and integrates seamlessly with vcpkg and Docker setups, making it an ideal choice for researchers interested in flexible experimentation.
knowhere
Knowhere is the C++ vector search engine that serves as the internal core of Milvus, and this guide offers detailed instructions for building it from source. It invites contributions from developers, especially those on Ubuntu, CentOS, and macOS, and covers system prerequisites, package installation, compiling the source, and testing procedures. It also includes steps for building Python wheels, performing clean-up tasks, and running pre-commit checks. This comprehensive guide helps developers get a working Knowhere build and supports cross-platform, collaborative development.
axodox-machinelearning
This project provides a complete C++ solution for Stable Diffusion image synthesis, eliminating the need for Python and enhancing deployment efficiency. It includes txt2img, img2img, inpainting functions, and ControlNet for guided generation. The library, optimized for DirectML, is aimed at real-time graphics and game developers, offering GPU-accelerated feature extraction. Prebuilt NuGet packages are available for seamless integration into Visual Studio C++ projects.
dlib
The dlib C++ library provides an extensive suite of machine learning tools and algorithms for efficient software development. It supports AVX instructions, CMake, and vcpkg integration, and is available for both C++ and Python environments. Under the Boost Software License, dlib is suited for a range of applications, supported by notable research funding.
frugally-deep
Discover a streamlined method to incorporate Keras models into C++ applications using this lightweight, header-only library. It enables efficient model predictions without relying on TensorFlow, providing smaller binary sizes and single-core CPU operation. This library is optimized for ease of integration and supports both Keras sequential and functional API models, accommodating complex architectures with popular dependencies like FunctionalPlus and Eigen. Suitable for developers looking to integrate machine learning into C++ efficiently.
awesome-cpp
Discover an expansive, curated selection of C++ libraries and frameworks. The list spans areas from standard libraries and AI to audio processing and database management, equipping developers with essential tools for varied programming challenges. Stay informed about the latest advancements in open-source technology for robust development and design.
write-you-a-vector-db
Explore a detailed tutorial for integrating vector functionalities into relational database systems using C++ and Rust. Learn to implement features similar to pgvector on a modified BusTub system or add vector capabilities to the RisingLight system. Join the community on Discord for collaboration. The tutorial is shared under the MIT license, with some restrictions related to CMU-DB course content.
pytorch-cpp
This project provides C++ implementations of PyTorch tutorials, designed for deep learning researchers. Tutorials are divided into basic, intermediate, and advanced sections, covering topics such as linear regression, CNNs, RNNs, and GANs. It supports macOS, Linux, and Windows, with setup instructions using CMake and Conda. Users can access interactive learning on platforms like Google Colab and Docker for practical deep learning insights.
TrinityCore
Discover this open-source MMORPG framework based on MaNGOS, designed to improve in-game mechanics and functionality. Built with C++, it encourages contributions and collaboration on GitHub. Available for Windows, Linux, and macOS, it includes comprehensive installation guides. Participate in its development through forums or Discord to enhance this evolving framework.
mlpack
The fast, header-only C++ library mlpack provides flexible machine learning solutions with bindings for Python, R, Julia, and Go. Designed as a versatile tool similar to LAPACK, it facilitates quick integration, deployment, and interactive prototyping across various machine learning methods. Discover its robust C++ interface and comprehensive documentation for advancing machine learning projects.
JamSpell
JamSpell provides a fast, context-aware spell checking solution for multiple languages, achieving nearly 5,000 words per second. It supports SWIG bindings and languages like Java and C#. The Pro version enhances accuracy with CatBoost, allows runtime additions, and includes pre-trained models for numerous languages.
eos
EOS is a disk-based storage system optimized for multi-PB applications, using XRootD for remote access. Built to serve CERN's large-scale storage needs, it runs on commodity hardware in JBOD setups and supports multiple access protocols, including native XRootD, POSIX-like FUSE, and HTTP(S). Developed mainly in C/C++, EOS offers comprehensive documentation and community support. Its open-source GPL v3 license encourages ongoing development and integration, making it a viable solution for large-scale data storage.
sobjectizer
SObjectizer is a framework that facilitates the development of concurrent C++ applications with support for Actor Model, Publish-Subscribe Model, and CSP-like channels. Its well-established environment ensures reliability in cross-platform development across Windows, Linux, FreeBSD, macOS, and Android. The API is intuitive and accompanied by numerous examples, while its BSD-3-Clause license allows free use in commercial software. The companion project so5extra provides additional features, extending functionalities beyond the main framework.
gpu.cpp
gpu.cpp facilitates cross-platform GPU computation in C++ using the WebGPU specification for portable GPU interfacing. Compatible with Nvidia, Intel, AMD, and more, it operates on diverse hardware including laptops, mobile devices, and desktops. Notable features are its minimalistic API, rapid compile/run cycles under 5 seconds, and a lack of heavy dependencies. The header-only design supports swift development cycles, ideal for developers and researchers aiming for effective GPU use without extensive setup. Discover its capabilities with examples like matrix multiplication and physics simulations, enabling portable GPU compute across different platforms.
benchmark
Discover this open-source library for benchmarking C++ code snippets, similar in spirit to unit testing; it requires C++14. It integrates with Google Test and offers guidance on CMake installation, Python bindings, and configuration options. Explore its stable and experimental APIs to improve code performance, ideal for developers seeking efficient benchmarking solutions.
LlamaGPTJ-chat
Discover a C++ command-line chat application supporting GPT-J, LLaMA, and MPT models. Built on llama.cpp and gpt4all-backend, the program is cross-platform, with precompiled binaries and an easy installation process for Linux, macOS, and Windows. Features include customizable AI personalities, interactive commands, and reset capabilities, along with chat-log storage and JSON-based parameter configuration. It is designed for both new and experienced users seeking advanced AI engagement.
taskflow
Taskflow improves parallel programming by providing efficient task decomposition strategies with both regular and irregular compute patterns. It features work-stealing scheduling, conditional and heterogeneous tasking, and modular graph composition for optimal CPU-GPU performance. The built-in profiling tools enable workflow visualization and optimization, suitable for scientific computing and industrial applications. Many leaders in academia and industry utilize this system for advanced task graph computing.
onnx-tensorrt
Optimize deep learning workflows with the TensorRT backend designed for ONNX model execution. This project aligns with TensorRT 10.5, ensuring full-dimension and dynamic shape processing. It integrates seamlessly with C++ and Python tools such as trtexec and polygraphy, enhancing model parsing efficiency. Comprehensive documentation, including FAQs and changelogs, aids in adaptive CUDA environment setups, making it a robust choice for ONNX deployment across experience levels.
kaldi
Kaldi provides a robust toolkit for speech recognition across UNIX, Windows, mobile, and web platforms. It includes setup instructions for UNIX systems and platform-specific guidance for PowerPC, Android, and WebAssembly. The toolkit adheres to the Google C++ Style Guide, facilitating contributions through a detailed development workflow and community forums. Explore technical documentation and C++ coding tutorials on the project's website for efficient integration in diverse environments.
Open3D
Open3D is an open-source library designed for 3D data processing using optimized C++ and Python APIs. It includes key features such as 3D data structures, algorithms, scene reconstruction, and visualization tools. Its support for machine learning with PyTorch and TensorFlow is enhanced by GPU acceleration. Open3D suits complex 3D applications, with comprehensive documentation and an active community. Users can install pre-built packages or compile from source for tailored setups.
YOLOv8-TensorRT-CPP
This C++ implementation of YOLOv8 via TensorRT excels in object detection, semantic segmentation, and body pose estimation. Optimized for GPU inference, the project uses the TensorRT C++ API and integrates with ONNX models exported from PyTorch. It runs on Ubuntu and requires CUDA, cuDNN, and OpenCV built with CUDA support. Users will find comprehensive setup instructions, model conversion guidance, and INT8 inference optimization tips, making the project ideal for high-performance vision applications on NVIDIA GPUs.