# GPU support
keras-js
Keras.js runs Keras models in the browser with GPU acceleration via WebGL, and supports models trained with backends such as TensorFlow and CNTK. The project is no longer actively developed, but its demos, including an MNIST CNN and ResNet-50, still illustrate in-browser model execution, and it can also run in Node.js in CPU mode. It targets Keras 2.1.2; for an actively maintained successor, see TensorFlow.js.
booster
Booster is an LLM inference accelerator written in Golang and C++, built for high performance and scalability in both production and experimentation. It runs on CPUs and GPUs, including Nvidia CUDA and Apple Metal, across multiple hardware environments with no Python dependencies. Compatible with popular LLM architectures such as LLaMA and Mistral, it features Janus Sampling for better code generation and non-English language processing, easing integration into real-world applications.
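To ground what the sampler stage of an inference engine does, here is a minimal temperature plus nucleus (top-p) sampling sketch in plain Python. This is a generic illustration only; booster's Janus Sampling is its own project-specific scheme, and the function below is not part of its API.

```python
import math
import random

def top_p_sample(logits, p=0.9, temperature=1.0, rng=random):
    """Sample a token id from logits using temperature scaling and
    nucleus (top-p) filtering. Illustrative, not booster's sampler."""
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the smallest set of tokens whose cumulative probability >= p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break

    # Renormalize over the nucleus and draw one token id.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

# With one dominant logit and a small p, the nucleus collapses to one token.
print(top_p_sample([10.0, 0.0, 0.0], p=0.5))  # 0
```

Lowering `p` or `temperature` makes output more deterministic; raising them increases diversity, which is the core trade-off any sampler tunes.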
docker-whisperX
docker-whisperX packages WhisperX, automatic speech recognition with detailed timestamps and speaker identification, as ready-to-run Docker images. Its CI uses free GitHub runners with weekly parallel workflows to build 175 Docker images of 10GB each, leveraging caching and build-optimization strategies. It offers GPU support on Windows, Linux, and macOS, with pre-built images covering diverse language models. You can also build customized images for specific languages, or choose a Red Hat UBI base for extra security and performance, making it suitable for scalable, efficient speech-recognition deployments.
Flux.jl
Flux is a pure-Julia machine learning framework with a straightforward, hackable design. Its lightweight abstractions build on Julia's native GPU support and automatic differentiation, making complex models easy to express. Compatible with Julia 1.9 or newer, it offers simple setup and detailed documentation to support experimentation, and is a versatile choice for researchers and developers in the Julia ecosystem.
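For readers new to frameworks like Flux, here is the kind of training loop its abstractions (and Julia's automatic differentiation) hide. This sketch is in Python for illustration, with the mean-squared-error gradients written out by hand; it is a conceptual stand-in, not Flux code.

```python
# Fit y = w*x + b by gradient descent on a toy dataset generated
# from the true line y = 3x + 1. Frameworks like Flux compute these
# gradients automatically; here they are derived by hand.
data = [(x, 3.0 * x + 1.0) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction error
        gw += 2 * err * x / len(data)  # d(MSE)/dw
        gb += 2 * err / len(data)      # d(MSE)/db
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges near 3.0 and 1.0
```

A framework replaces the hand-written gradient lines with automatic differentiation of an arbitrary loss, which is what makes deeper models tractable.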
pomegranate
Pomegranate is a probabilistic modeling library rebuilt on PyTorch, bringing faster computation and a modular design. It supports GPU acceleration, mixed precision, and easy integration with neural networks, so complex Bayesian networks and hidden Markov models can be composed seamlessly. The v1.0.0 rewrite, although not backwards compatible, improves performance and addresses long-standing community requests, including a focus on semi-supervised learning capabilities.
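To make the hidden Markov model use case concrete, here is the classic forward-algorithm recursion in plain Python lists. It computes a sequence likelihood the same way an HMM library does internally (pomegranate runs this as batched PyTorch tensor ops, optionally on GPU); the parameters below are made up for illustration and this is not pomegranate's API.

```python
# Forward algorithm: P(observed sequence) under a 2-state HMM.
start = [0.6, 0.4]                # initial state probabilities (illustrative)
trans = [[0.7, 0.3], [0.4, 0.6]]  # transition matrix
emit = [[0.9, 0.1], [0.2, 0.8]]   # P(observation symbol | state)
obs = [0, 0, 1]                   # observed symbol sequence

states = len(start)
# alpha[i] = P(observations so far, current state = i)
alpha = [start[i] * emit[i][obs[0]] for i in range(states)]
for o in obs[1:]:
    alpha = [
        sum(alpha[j] * trans[j][i] for j in range(states)) * emit[i][o]
        for i in range(states)
    ]

likelihood = sum(alpha)
print(likelihood)
```

Real libraries do this in log space for numerical stability on long sequences; the structure of the recursion is the same.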
recommenders-addons
TensorFlow Recommenders Addons enhances TensorFlow by integrating dynamic embedding technology, facilitating scalable models for search, recommendation, and advertising applications. The module supports GPU-based training and inference, ensuring compatibility with TensorFlow's built-in optimizers and efficient key-value embeddings. Additionally, it integrates seamlessly with Triton Inference Server and is installable via PyPI, supporting various TensorFlow versions including those optimized for Apple silicon. The project benefits from contributions by organizations such as NVIDIA and Tencent.
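The core idea behind dynamic embeddings can be sketched with a dictionary-backed table: rows are allocated on first lookup of an id, so the vocabulary never has to be sized or hashed down in advance. The class and method names below are illustrative assumptions, not the TensorFlow Recommenders Addons API.

```python
import random

class DynamicEmbedding:
    """Toy key-value embedding table: rows are created lazily as new
    ids appear, mirroring the dynamic-embedding idea. Illustrative only."""

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.table = {}                 # id -> embedding vector
        self.rng = random.Random(seed)

    def lookup(self, ids):
        out = []
        for i in ids:
            if i not in self.table:     # allocate a row on first sight
                self.table[i] = [self.rng.gauss(0.0, 0.1)
                                 for _ in range(self.dim)]
            out.append(self.table[i])
        return out

emb = DynamicEmbedding(dim=4)
emb.lookup([3, 17, 3])                  # id 3 reuses its existing row
print(len(emb.table))                   # 2 distinct ids allocated
```

In a production system the table is a sharded, trainable key-value store with eviction policies; the lazy-allocation behavior shown here is what distinguishes it from a fixed-size embedding matrix.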
neoml
NeoML is a versatile machine learning framework for developing, training, and deploying models on platforms including Windows, Linux, macOS, iOS, and Android. It is designed for applications such as computer vision, natural language processing, and OCR. NeoML supports various neural network layers and machine learning algorithms and executes tasks on both CPU and GPU. It integrates with Python, C++, Java, and Objective-C, supports ONNX format for compatibility, and offers features like multi-threading and GPU optimization, enabling efficient processing of both structured and unstructured data.
candle
Candle is a Rust-based machine learning framework emphasizing performance and ease of use, running on both CPU and GPU. Its small footprint makes serverless inference practical, avoiding the bulk of conventional frameworks so developers can deploy quickly. Candle ships models such as LLaMA, Stable Diffusion, and YOLO, with strong language and vision functionality. Its features include CUDA support, multi-GPU distribution, and WASM for running models in the browser, and its rich examples range from text generation to image segmentation, covering diverse machine learning applications.
btop
Btop++ provides Linux, macOS, and BSD users with a resource monitoring tool that displays detailed statistics for processors, memory, disks, networks, and processes. It includes support for Intel GPUs and NetBSD, building upon bashtop and bpytop to enhance monitoring capabilities. The tool features a user-friendly interface with mouse support and customizable themes. Recent updates include GPU monitoring, expanding functionality for developers and tech enthusiasts seeking insights for effective system management.
rust
TensorFlow Rust provides Rust language bindings for TensorFlow, supporting CPU and GPU tasks via idiomatic integration. It offers both automatic and manual installation options, catering to custom builds or unreleased versions. Actively developed with comprehensive documentation and community support, it is an excellent choice for leveraging TensorFlow's capabilities with Rust's advantages.
Feedback Email: [email protected]