# GPU Acceleration

## pytorch
PyTorch provides tensor computation with GPU support and dynamic neural networks built on an autograd system. It integrates tightly with Python, so libraries such as NumPy and SciPy can be used alongside it for flexible scientific computing. Features include dynamically defined network structure, memory-efficient execution, and straightforward extensibility, making it well suited to researchers and developers working in AI and machine learning.
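A minimal sketch of the two features mentioned above, GPU tensors and autograd; the CPU fallback is standard PyTorch practice:

```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tensor that participates in automatic differentiation.
x = torch.randn(3, 3, device=device, requires_grad=True)

# Build a scalar from x; autograd records the computation graph.
y = (x ** 2).sum()
y.backward()        # populates x.grad with dy/dx = 2 * x
print(x.grad)
```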
## NeMo-Curator
NeMo Curator is a GPU-accelerated open-source library for speeding up dataset preparation in generative AI workflows. Built on Dask and RAPIDS, it provides efficient modules for curating multilingual text and image data used in training and fine-tuning. Modules for language identification, filtering, and deduplication support tasks from pretraining to fine-tuning, and its modular design lets users compose and customize data-curation pipelines.
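A hedged sketch of a single curation step, filtering out short documents, following NeMo Curator's documented `ScoreFilter`/`WordCountFilter` pattern; module paths can shift between releases, and the file paths and word threshold here are placeholders:

```python
from nemo_curator import ScoreFilter
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import WordCountFilter

# Load JSONL documents as a Dask-backed dataset (placeholder path).
dataset = DocumentDataset.read_json("data/*.jsonl")

# Keep only documents with at least 80 words (assumed threshold).
filter_step = ScoreFilter(WordCountFilter(min_words=80), text_field="text")
filtered = filter_step(dataset)

# Write the curated shard back to disk (placeholder output directory).
filtered.to_json("curated/")
```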
## languagemodels
This Python package runs large language models efficiently on systems with as little as 512 MB of RAM, enabling tasks such as instruction following and semantic search while keeping data private. Performance improves further with GPU acceleration and int8 quantization. Easy to install, the package is a good fit for chatbots, real-time information access, and education, and it serves both learners and professionals, including potential commercial use.
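A short sketch following the package's README-style usage: `lm.do` follows an instruction, while `store_doc`/`get_doc_context` back the semantic-search feature. Treat the exact helper names as assumptions if your installed version differs:

```python
import languagemodels as lm

# Instruction following: the model answers a direct request.
print(lm.do("Translate 'bonjour' to English."))

# Semantic search: store documents locally, then retrieve context
# relevant to a query, so data never leaves the machine.
lm.store_doc("Mars is the fourth planet from the Sun.")
lm.store_doc("Jupiter is the largest planet in the Solar System.")
print(lm.get_doc_context("Which planet is fourth from the Sun?"))
```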
## IQA-PyTorch
This toolbox, built with Python and PyTorch, provides efficient, GPU-accelerated image quality assessment. It supports a wide range of full-reference and no-reference metrics and runs evaluations faster than traditional MATLAB scripts. Recent updates add improved metrics and reduce GPU memory requirements. The toolbox can be used from the command line or integrated into larger projects, with customizable settings and support for use as a loss function. Benchmark performance comparisons and datasets are also provided for robust evaluation.
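A brief sketch of both metric families via `pyiqa.create_metric`, the package's documented entry point; the image file names are placeholders:

```python
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Full-reference metric: scores a distorted image against a reference.
lpips = pyiqa.create_metric("lpips", device=device)
fr_score = lpips("distorted.png", "reference.png")  # placeholder paths

# No-reference metric: scores a single image on its own.
niqe = pyiqa.create_metric("niqe", device=device)
nr_score = niqe("distorted.png")

# For both of these particular metrics, lower scores mean better quality.
print(fr_score.item(), nr_score.item())
```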
## stable-diffusion.cpp
stable-diffusion.cpp is a minimal C/C++ implementation of Stable Diffusion and Flux inference built on ggml, supporting a wide range of model versions including SD1.x, SD2.x, and SDXL. Inspired by llama.cpp, it emphasizes memory efficiency and accelerates inference on CPU and GPU via CUDA, Metal, Vulkan, and SYCL. It handles diverse weight formats, offers straightforward quantization and a range of sampling methods, and runs on Linux, macOS, Windows, and Android, giving developers a versatile, optimized inference option with broad integration possibilities.
## jax
JAX is a Python library for efficient numerical computing and large-scale machine learning on accelerators such as GPUs and TPUs. It provides automatic differentiation of Python and NumPy functions and compiles programs for fast execution. With transformations such as `grad` for differentiation and `jit` for just-in-time compilation, JAX simplifies the development of sophisticated algorithms. Contributions are welcome via feedback and bug reports.
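A minimal example of the two transformations named above, composing `jax.grad` with `jax.jit`:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    # A simple scalar-valued function of the parameters w.
    return jnp.sum((x @ w) ** 2)

# grad differentiates loss with respect to its first argument;
# jit compiles the result for the available accelerator (CPU/GPU/TPU).
grad_loss = jax.jit(jax.grad(loss))

w = jnp.ones((3,))
x = jnp.ones((4, 3))
print(grad_loss(w, x))  # d(loss)/dw, compiled on first call
```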