#neural networks

sonnet
Sonnet, created by DeepMind researchers, is a library for building neural networks in TensorFlow 2. Its central abstraction is `snt.Module`, which encourages modular, reusable components adaptable to many forms of learning. Sonnet ships predefined modules such as `snt.Linear`, `snt.Conv2D`, and `snt.nets.MLP`, and makes it straightforward to write custom ones. It deliberately omits an integrated training framework, letting users adopt existing solutions or build their own, including distributed training setups. Simple installation and illustrative examples on Google Colab make Sonnet accessible for constructing complex machine learning models.
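As a sketch of that workflow, assuming Sonnet 2 and TensorFlow 2 are installed, a predefined module and a minimal custom `snt.Module` might look like this:

```python
import sonnet as snt
import tensorflow as tf

# Predefined module: parameters are created lazily on the first call.
mlp = snt.nets.MLP([128, 64, 10])
logits = mlp(tf.random.normal([8, 784]))

# Custom module: subclass snt.Module and define __call__.
class Scale(snt.Module):
    def __init__(self, factor, name=None):
        super().__init__(name=name)
        self.factor = factor

    def __call__(self, x):
        return x * self.factor
```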
lectures
Discover a course on Natural Language Processing focusing on neural networks applied to speech and text analysis. It covers essential topics such as sequential language modeling and transduction tasks, complemented by hands-on projects run on CPU and GPU hardware. The course is led by Phil Blunsom together with the DeepMind Natural Language Research Group, and aims to deepen understanding of neural networks in NLP.
CNTK
Microsoft Cognitive Toolkit (CNTK), an open-source deep learning framework, models neural networks as computational graphs for seamless execution of architectures like DNNs, CNNs, and RNNs. It implements stochastic gradient descent with automatic differentiation and supports multi-GPU and multi-server parallelization, making it suitable for intensive deep learning applications. Though major development has ended, CNTK remains compatible with the ONNX standard, promoting AI framework interoperability. Extensive resources are available for users to explore and optimize the toolkit's features.
sparseml
SparseML is an open-source toolkit that optimizes neural networks using sparsification techniques, including pruning, quantization, and distillation. These methods create faster, smaller models while maintaining performance. SparseML integrates with PyTorch and Hugging Face and supports Sparse Transfer Learning through SparseZoo pre-trained models. Additionally, it converts optimized models to ONNX for deployment with DeepSparse, achieving GPU-level performance on CPUs. The toolkit provides a flexible recipe-based approach to model optimization with comprehensive tutorials and popular ML framework integrations.
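A minimal sketch of the recipe-based workflow with the PyTorch integration, using a toy model so the snippet stands alone; the `recipe.yaml` path is hypothetical:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from sparseml.pytorch.optim import ScheduledModifierManager

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_loader = DataLoader(TensorDataset(torch.randn(64, 10),
                                        torch.randint(0, 2, (64,))),
                          batch_size=8)

# The recipe declares the pruning/quantization schedule; the manager
# wraps the optimizer so modifiers fire during the normal training loop.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")  # hypothetical path
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

# ... standard PyTorch training loop ...

manager.finalize(model)  # detach hooks once sparsification is complete
```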
awesome_deep_learning_interpretability
This collection presents extensive research on deep learning model interpretability, emphasizing progress in model explanation. It comprises 159 key papers, sorted by citation count, with Tencent Weiyun PDF links for easy access. Regular updates keep it current, and it covers a variety of interpretability techniques, examples, and practical AI applications across fields. Notable entries such as 'Score-CAM' and 'Interpretable CNNs' offer insight into neural network transparency.
byol-pytorch
Explore a practical PyTorch implementation of BYOL for self-supervised learning, which sidesteps contrastive learning and the need for negative pairs. It wraps any image-based neural network and trains on unlabelled data. Recent updates add options such as group normalization with weight standardization. Configurable augmentations and distributed training support help improve the network's downstream performance on supervised tasks at low labelling cost.
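A condensed sketch of one training step, close to the repository's README usage; the ResNet backbone and `hidden_layer` name are illustrative:

```python
import torch
from torchvision import models
from byol_pytorch import BYOL

resnet = models.resnet50(pretrained=True)

# BYOL wraps any image network; augmentations and the target network's
# moving average are handled internally.
learner = BYOL(resnet, image_size=256, hidden_layer="avgpool")
opt = torch.optim.Adam(learner.parameters(), lr=3e-4)

images = torch.randn(4, 3, 256, 256)  # stand-in for an unlabelled batch
loss = learner(images)
opt.zero_grad()
loss.backward()
opt.step()
learner.update_moving_average()  # update the target network after each step
```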
equinox
Equinox is a versatile JAX library that simplifies model construction with PyTorch-inspired syntax. It offers advanced capabilities such as PyTree manipulation and runtime error checking, and it integrates cleanly with the rest of the JAX ecosystem. Equinox is a solid choice for developers transitioning from Flax or Haiku, thanks to its filtered transformations, which let JIT and grad boundaries handle models containing non-array components. It requires Python 3.9+ and JAX 0.4.13+, and is deliberately a library rather than a framework, suiting both researchers and developers.
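A minimal sketch of the PyTorch-like syntax and the filtered transformations, assuming a recent Equinox release:

```python
import equinox as eqx
import jax
import jax.numpy as jnp

# Models are immutable pytrees; layers take an explicit PRNG key.
model = eqx.nn.MLP(in_size=2, out_size=1, width_size=32, depth=2,
                   key=jax.random.PRNGKey(0))

@eqx.filter_jit  # JIT that tolerates non-array leaves in the model pytree
def loss_fn(model, x, y):
    pred = jax.vmap(model)(x)
    return jnp.mean((pred - y) ** 2)

x, y = jnp.ones((16, 2)), jnp.zeros((16, 1))
grads = eqx.filter_grad(loss_fn)(model, x, y)  # gradients w.r.t. the model
```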
synjax
SynJax is a neural network library designed for JAX, emphasizing structured probability distributions such as Linear Chain CRF and Semi-Markov CRF. It supports essential operations like log-probabilities and entropy calculations, utilizing JAX transformations for optimized performance. SynJax provides easy installation aligned with JAX's guidelines and includes practical examples in its notebooks for effective learning.
glow
Glow is a machine learning compiler for hardware accelerators that integrates with high-level frameworks. It optimizes code generation for neural network graphs using classical compiler techniques. Glow's lowering phase reduces the many input operators to a small set of linear-algebra primitives, which makes it easier to support diverse hardware targets. The project is continuously developed with industry partners and builds on macOS and Linux with a modern C++ compiler. Comprehensive documentation and examples are available for integration and testing.
dm-haiku
Haiku is a compact neural network library for JAX that offers an object-oriented programming model combined with JAX's function transformations. Created by the developers of Sonnet for TensorFlow, Haiku focuses on clean parameter and state management without imposing a full framework. Although it is now in maintenance mode, receiving only bug fixes and compatibility updates, Haiku still offers its key features, `hk.Module` and `hk.transform`, easing the transition from TensorFlow to JAX. It scales to large projects, supports stochastic models and non-trainable state, and extends to distributed training through `jax.pmap`. Well-documented resources and examples help users apply Haiku effectively.
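The core pattern is `hk.transform`, which turns object-oriented module code into pure init/apply functions; a minimal sketch:

```python
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    # Modules can only be instantiated inside a transformed function.
    return hk.nets.MLP([128, 10])(x)

forward_t = hk.transform(forward)
rng = jax.random.PRNGKey(42)
x = jnp.ones([8, 784])

params = forward_t.init(rng, x)           # pure: returns the parameter pytree
logits = forward_t.apply(params, rng, x)  # pure: parameters passed explicitly
```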
learning-to-learn
This open-source project uses TensorFlow and Sonnet to explore learning-to-learn: optimizers that are themselves trained (L2L). Command-line tools run the included problems, which range from quadratic functions to MNIST and CIFAR-10 classification. The flexible design allows easy integration of new problems and adjustment of parameters such as the learning rate and number of epochs, showcasing learned optimization across diverse challenges. Not officially affiliated with Google.
Augmentor
Augmentor is a Python library for machine learning that performs framework-independent image augmentation through a stochastic pipeline. It supports a variety of techniques, such as rotations and elastic distortions, useful for training neural networks. With multi-threading to boost performance and integration points for Keras and PyTorch, it simplifies complex image preprocessing.
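A short sketch of the stochastic pipeline, following the project's documented usage; the image directory and sample count are illustrative:

```python
import Augmentor

# Each operation fires with its own probability, so every sampled
# image receives a different combination of augmentations.
p = Augmentor.Pipeline("data/images")  # hypothetical source directory
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.zoom(probability=0.5, min_factor=1.1, max_factor=1.5)
p.sample(10000)  # write 10,000 augmented images to an output folder
```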
tensorflow
TensorFlow is a leading open-source framework for machine learning, recognized for its comprehensive suite of tools and libraries. Initially developed by Google Brain, it facilitates cutting-edge ML research and application development. With stable Python and C++ APIs, it supports multiple languages and offers diverse installation options, including GPU capabilities. Engage with the TensorFlow community for collaborative advancement in machine learning.
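As a small illustration of the Python API, a Keras classifier can be defined and compiled in a few lines:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),  # logits for 10 classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```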
nnstreamer
NNStreamer facilitates the integration of neural network models into GStreamer applications. It provides developers with robust plugins to manage neural network pipelines and filters efficiently. The project supports multi-modal intelligence and composite models, enabling multiple neural networks within a single stream. Compatible with platforms such as Tizen, Ubuntu, Android, and macOS, NNStreamer also supports hardware acceleration with Movidius-X, Edge-TPU, and Qualcomm SNPE, enhancing deployment and execution of AI systems.
deepflame-dev
Explore DeepFlame, an open-source platform that combines OpenFOAM, Cantera, and PyTorch, offering advanced simulations of reactive flows using deep learning. The platform supports high-performance computing infrastructures and excels in simulating flows at varying speeds. Recent updates introduce enhanced solvers, two-phase flow models, and expanded neural network capabilities, ensuring efficient and effective CFD applications.
lightning-uq-box
Discover a PyTorch library providing a range of Uncertainty Quantification (UQ) techniques for modern deep neural networks. Lightning-UQ-Box integrates UQ into existing workflows and makes methods easy to compare, prioritizing reproducibility and streamlined code. Lightning Modules and a command-line interface support systematic experimentation. The methods span single-forward-pass and approximate Bayesian techniques, generative models, and post-hoc approaches, contributing to UQ research in an open-source environment.
tensorflow-speech-recognition
Discover insights into speech recognition using the TensorFlow sequence-to-sequence framework. Despite its outdated status, this project serves educational purposes, focusing on creating standalone Linux speech recognition. While new projects like Whisper and Mozilla's DeepSpeech lead advancements, foundational techniques remain essential. Packed with modular extensions and educational examples, it offers a platform for learning and experimentation. Detailed installation guides specify key dependencies such as PyAudio.
deepvariant
DeepVariant, a deep learning-based variant caller, effectively identifies genetic variants across multiple sequencing data types such as NGS, RNA-seq, PacBio, and Oxford Nanopore. It is optimized for variant calling in diploid organisms, supporting whole genome and exome sequences. DeepVariant uses CNNs to transform aligned reads into pileup images, classifying and reporting genotypes with precision. Though primarily trained on human data, its functionality is enhanced for trio or duo data by tools like DeepTrio and GLnexus. It operates efficiently on various infrastructure, including cloud computing, offering a robust solution for precision genomic analysis.
pytorch-lr-finder
The PyTorch learning rate finder implements the learning rate range test to identify suitable learning rates for neural networks. It increases the learning rate linearly or exponentially during a short pre-training run, which also aids training with cyclical learning rates. The tool supports gradient accumulation and mixed precision training and produces plots of loss versus learning rate, making it easier to choose a rate that speeds up convergence.
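A sketch of the range test, stubbed with toy data so the snippet stands alone:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch_lr_finder import LRFinder

model = torch.nn.Linear(10, 2)
train_loader = DataLoader(TensorDataset(torch.randn(256, 10),
                                        torch.randint(0, 2, (256,))),
                          batch_size=32)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-7)  # start very low

lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
lr_finder.range_test(train_loader, end_lr=10, num_iter=100)  # exponential sweep
lr_finder.plot()   # loss vs. learning rate; pick the steepest-descent region
lr_finder.reset()  # restore model and optimizer to their initial state
```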
tflearn
TFLearn provides a flexible deep learning library based on TensorFlow, featuring an intuitive high-level API to accelerate experimentation. It offers diverse neural network models, configurable layers, and efficient functions for training. Fully compatible with TensorFlow, TFLearn supports both CPU and GPU configurations. The library enhances transparency with comprehensive graph visualizations and accommodates contemporary models like LSTM and generative networks. The latest version is aligned with TensorFlow v2.0+, ensuring up-to-date deep learning methodologies.
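The high-level API chains layer functions into a graph; a minimal classifier sketch, with training data `X`, `Y` assumed:

```python
import tflearn

# Layers compose into a graph; `regression` attaches loss and optimizer.
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation="relu")
net = tflearn.fully_connected(net, 10, activation="softmax")
net = tflearn.regression(net, optimizer="adam", loss="categorical_crossentropy")

model = tflearn.DNN(net, tensorboard_verbose=0)  # graph inspectable in TensorBoard
# model.fit(X, Y, n_epoch=10, batch_size=32, show_metric=True)
```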
alpa
Alpa streamlines scalable training and inference of large neural networks through automatic parallelization in distributed environments, integrating with advanced libraries such as Jax and XLA. Although the project is no longer active, its core algorithms have been merged into the actively maintained XLA, so they continue to benefit model scaling.
lingvo
Explore a framework tailored for building neural network models, focusing on sequence models within TensorFlow. From automatic speech recognition to advanced machine translation, learn about Lingvo's capabilities for constructing and refining AI models. With support for various TensorFlow versions, Lingvo provides guidance on installation, model execution, and is suitable for both beginners and seasoned developers. Access detailed documentation, community contributions, and a suite of tools designed to enhance your AI research and development.
Omega-AI
Discover a robust deep learning framework built in Java that makes it easy to set up neural networks and train models. It supports GPU acceleration and a diverse range of models, including CNN, RNN, VGG16, ResNet, YOLO, LSTM, Transformer, and GPT2, with enhanced multi-threading and optimizations for CUDA and cuDNN. The framework suits Java developers and includes comprehensive guides for GPU configuration. Connect with the community for insights and contributions, and visit Omega-AI's repositories on Gitee and GitHub for more information.
torchmd-net
TorchMD-NET provides neural network potentials enhanced for GPU molecular dynamics, supporting architectures like Equivariant Transformer, Graph Neural Network, and TensorNet. Installable via conda-forge or pip, it offers configuration flexibility through YAML or command line, supporting multi-node and GPU-specific settings, with comprehensive documentation to assist in molecular research applications.
tensorlayer-chinese
The TensorLayer library, grounded in TensorFlow, offers extensive Chinese documentation and active community forums. Aimed at facilitating AI development, it equips researchers and engineers with diverse neural network tools. Engage with its dynamic user communities on platforms like QQ, WeChat, and Slack to collaborate and innovate in AI solutions.
ppq
This advanced framework facilitates neural network quantization across various hardware platforms by transforming floating-point operations into fixed-point, enhancing chip design efficiency. It offers customizable quantization processes compatible with TensorRT and OpenVINO. The 0.6.6 version introduces FP8 quantization, upgraded Python APIs, and sophisticated graph fusion, providing adaptable solutions for evolving AI applications.
OMLT
OMLT is a Python package designed to incorporate machine learning models, including neural networks and gradient-boosted trees, into the Pyomo optimization framework. It supports diverse optimization formulations such as full-space, reduced-space, and MILP, and facilitates the import of Keras and ONNX models. Aimed at engineers and developers, it enhances optimization tasks through machine learning methodologies. Comprehensive documentation and illustrative Jupyter notebooks provide clear guidance, assisting users in implementing and expanding their optimization techniques efficiently. Discover OMLT for powerful surrogate models in engineering applications.
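A rough sketch of embedding a trained Keras surrogate into a Pyomo model; `keras_model` is an assumed, already-trained `Sequential` network with one input, and the helper names follow OMLT's documented I/O utilities:

```python
import pyomo.environ as pyo
from omlt import OmltBlock
from omlt.io import load_keras_sequential
from omlt.neuralnet import FullSpaceNNFormulation

# Convert the Keras network into OMLT's internal network definition;
# bounds on the (scaled) inputs are required for a full-space formulation.
net = load_keras_sequential(keras_model, scaled_input_bounds={0: (0.0, 1.0)})

m = pyo.ConcreteModel()
m.surrogate = OmltBlock()
m.surrogate.build_formulation(FullSpaceNNFormulation(net))

# Optimize over the surrogate: minimize its first output.
m.obj = pyo.Objective(expr=m.surrogate.outputs[0], sense=pyo.minimize)
```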
calculate-flops.pytorch
Calflops provides a complete tool for calculating theoretical FLOPs, MACs, and parameter counts for diverse neural networks, including CNNs, RNNs, and large language models. It analyzes PyTorch-based models and reports detailed per-submodule metrics, giving a clearer picture of where compute is spent. Integration with Hugging Face lets it compute statistics for hosted models without downloading the full weights. Drawing inspiration from libraries such as ptflops, deepspeed, and hf accelerate, Calflops improves FLOPs calculation and supports Transformer models, making it a useful asset for performance analysis and optimization.
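Usage mirrors the project's documented example; here with a torchvision model for illustration:

```python
from calflops import calculate_flops
from torchvision import models

model = models.alexnet()
flops, macs, params = calculate_flops(
    model=model,
    input_shape=(1, 3, 224, 224),  # batch of one RGB 224x224 image
    output_as_string=True,
)
print(f"FLOPs: {flops}  MACs: {macs}  Params: {params}")
```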
pytorch-deep-learning
Discover a comprehensive course focusing on PyTorch for deep learning, including the latest PyTorch 2.0 tutorial. This hands-on course emphasizes practical coding with sections covering neural network classification, computer vision, transfer learning, custom datasets, and model deployment. Through milestone projects such as FoodVision, gain practical experience and develop a portfolio. This beginner-friendly course uses Google Colab notebooks and video content for understanding deep learning fundamentals.
tiny-dnn
The project delivers a C++14 library designed for deep learning on IoT devices and embedded systems, excelling in environments with constrained resources. It achieves high-speed performance without GPU dependence, supports effortless integration, and accommodates various neural network architectures and activation functions. Noteworthy features include TBB or OpenMP support for parallel computation, Intel SSE/AVX enhancements, and straightforward Caffe model importation. Being header-only, the library ensures extensive portability and serves as an effective tool for learning neural networks.
xtts-webui
XTTS-Webui provides a web interface for utilizing XTTS technology, enabling efficient voice synthesis through a portable version without complex installations. It offers batch dubbing, translation preservation, and neural enhancements, allowing model fine-tuning. The platform supports automated improvements and tools like RVC and OpenVoice. Users experience streamlined setup via Google Colab or manual methods, facilitating the production of high-quality audio outputs.
penzai
Penzai uses JAX to turn neural networks into easy-to-read pytree structures, ideal for post-training model exploration such as reverse-engineering and activation analysis. It includes tools like Treescope for visualization, JAX utilities for data manipulation, and a flexible neural network library. With its Transformer model implementations, Penzai aids in research on model interpretability and dynamics, while version 0.2 enhances workflow with new API features like mutable state management.
flops-counter.pytorch
The tool offers an accurate calculation of multiply-add operations and parameters in neural networks through PyTorch and ATEN backends. It provides detailed per-layer cost analysis, especially effective when using the ATEN backend for comprehensive support including transformer models. Key features include per-module statistics, detailed operation logs, and module exclusion options, accommodating complex research requirements. Supporting convolutional layers, activations, RNNs, and transformer architectures, the tool serves researchers and developers in assessing neural network complexities.
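A short sketch following the library's documented entry point; the `backend` argument (available in recent versions, an assumption about your installed release) selects between the two backends described above:

```python
import torchvision.models as models
from ptflops import get_model_complexity_info

net = models.resnet18()
macs, params = get_model_complexity_info(
    net, (3, 224, 224),         # input resolution, without batch dimension
    as_strings=True,
    print_per_layer_stat=True,  # per-module cost breakdown
    backend="pytorch",          # or "aten" for broader operator coverage
)
print(f"MACs: {macs}, parameters: {params}")
```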
sparsezoo
Explore a rich repository of sparsified neural network models with the SparseZoo platform. Simplify and accelerate your deep learning projects by leveraging pre-built, inference-optimized models and customizable sparsification recipes. With SparseZoo's API-driven approach, you can easily integrate with existing networks or start from scratch, providing flexible solutions suitable for various applications. Enjoy the benefits of open-source community support and continuous updates that enhance model efficiency and performance. The collection of sparsified models and recipes effectively reduces time-to-value in deep learning projects.
keras-cv
KerasCV provides a collection of modular computer vision components that integrate effortlessly with TensorFlow, JAX, and PyTorch, leveraging Keras 3. It facilitates tasks such as data augmentation, object detection, and segmentation, helping engineers swiftly construct advanced training and inference pipelines. This library assures framework compatibility, allowing for reuse without expensive migration processes. It also encourages community contributions to further enhance numerical performance and expand computer vision applications.
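For example, an augmentation layer drops into a pipeline like any other Keras layer; a minimal sketch with random data:

```python
import numpy as np
import keras_cv

# RandAugment is one of KerasCV's modular preprocessing layers.
images = np.random.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")
augmenter = keras_cv.layers.RandAugment(value_range=(0, 255))
augmented = augmenter(images)
```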
tnlearn
Tnlearn is an open-source Python library designed to optimize neural networks using symbolic regression to create task-based neurons. It constructs neural networks with diverse neuron types for improved feature representation and task adaptation, inspired by human brain diversity. Key features include vectorized symbolic regression and learnable parameter functions. Compatible with Python 3.9+, Tnlearn is easily installed with pip or conda, aiding efficient machine learning model development.
tiny-cuda-nn
Discover an efficient framework for training and querying neural networks, featuring a fast fully fused multi-layer perceptron and a versatile multiresolution hash encoding. Compatible with NVIDIA GPUs, it offers a C++/CUDA API and a PyTorch extension, supporting various encodings, losses, and optimizers. Utilities, performance benchmarks on RTX 3090 GPUs, and examples round out the package.
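Through the PyTorch extension, an encoding and a fused MLP combine into one module; a sketch with illustrative config values (the JSON keys follow the project's documented schema, and a CUDA device is required):

```python
import torch
import tinycudann as tcnn

# Hash-grid encoding feeding a fully fused MLP.
model = tcnn.NetworkWithInputEncoding(
    n_input_dims=3, n_output_dims=1,
    encoding_config={
        "otype": "HashGrid", "n_levels": 16, "n_features_per_level": 2,
        "log2_hashmap_size": 19, "base_resolution": 16, "per_level_scale": 2.0,
    },
    network_config={
        "otype": "FullyFusedMLP", "activation": "ReLU",
        "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 2,
    },
)

x = torch.rand(1024, 3, device="cuda")  # hash-grid inputs expected in [0, 1]
y = model(x)  # differentiable; trainable with standard PyTorch optimizers
```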