#optimization

learning-to-learn
This open-source project implements learning-to-learn (L2L) optimization in TensorFlow and Sonnet, in which a trained optimizer replaces hand-designed update rules. It provides command-line scripts for training and evaluating learned optimizers on benchmark problems, including quadratic functions and MNIST and CIFAR-10 classification. The flexible design makes it easy to plug in new problems and to adjust parameters such as the learning rate and number of epochs, showcasing TensorFlow's optimization capabilities across diverse tasks. Not officially affiliated with Google.
ai-hub-models
Qualcomm AI Hub Models offer machine learning solutions optimized for vision, speech, text, and generative AI applications on Qualcomm devices. Models are available through Hugging Face, with open-source deployment recipes and performance metrics across diverse Snapdragon devices. Compatible with Android, Windows, and Linux, these models support various precision levels and computing units, including CPU, GPU, and Hexagon DSP. Easily installable Python packages facilitate on-device and cloud-hosted deployments on different operating systems.
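A minimal sketch of loading a pretrained model from the Python package, assuming the per-model `Model.from_pretrained()` entry point described in the project's documentation (the model choice here is illustrative):

```python
# pip install qai-hub-models
# Assumption: each model directory exposes a Model class with from_pretrained().
from qai_hub_models.models.mobilenet_v2 import Model

model = Model.from_pretrained()  # fetches pretrained weights
print(type(model))

# Each model also ships a runnable end-to-end demo module, e.g.:
#   python -m qai_hub_models.models.mobilenet_v2.demo
```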
GeneticAlgorithmPython
PyGAD is an open-source Python library for building genetic algorithms and optimizing machine learning models. It integrates with Keras and PyTorch and supports both single- and multi-objective optimization. PyGAD offers user-defined fitness functions and a range of crossover, mutation, and parent-selection operators, making it a versatile tool for optimization tasks. Active development brings frequent feature updates, and comprehensive documentation with examples and case studies helps users apply its functionality effectively.
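A minimal runnable example of the core `pygad.GA` loop (the fitness function signature follows recent PyGAD releases; older versions omit the `ga_instance` argument):

```python
import numpy as np
import pygad

# Find weights w such that sum(w * inputs) approximates desired_output.
inputs = np.array([4.0, -2.0, 3.5, 5.0, -11.0, -4.7])
desired_output = 44.0

def fitness_func(ga_instance, solution, solution_idx):
    output = np.sum(solution * inputs)
    return 1.0 / (abs(output - desired_output) + 1e-6)

ga = pygad.GA(
    num_generations=100,
    num_parents_mating=4,
    sol_per_pop=8,
    num_genes=len(inputs),
    fitness_func=fitness_func,
    mutation_type="random",
)
ga.run()
solution, fitness, _ = ga.best_solution()
print(solution, fitness)
```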
OMLT
OMLT is a Python package that embeds trained machine learning models, including neural networks and gradient-boosted trees, into the Pyomo optimization framework. It supports several optimization formulations, such as full-space, reduced-space, and MILP, and imports models from Keras and ONNX. Aimed at engineers and developers who want ML surrogates inside optimization problems, it ships comprehensive documentation and illustrative Jupyter notebooks that guide implementation and extension.
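A hedged sketch of embedding a Keras network into a Pyomo model via an `OmltBlock`, following the names in OMLT's documented Keras path (exact import paths can differ between versions, and Ipopt is assumed to be installed):

```python
import pyomo.environ as pyo
import tensorflow as tf
from omlt import OmltBlock
from omlt.io import load_keras_sequential
from omlt.neuralnet import FullSpaceNNFormulation

# A tiny surrogate network: 1 input -> 1 output (untrained, for illustration).
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])

# scaled_input_bounds maps input index -> (lower, upper).
net = load_keras_sequential(keras_model, scaled_input_bounds={0: (0.0, 1.0)})

m = pyo.ConcreteModel()
m.nn = OmltBlock()
m.nn.build_formulation(FullSpaceNNFormulation(net))

# Minimize the network's output over its admissible inputs.
m.obj = pyo.Objective(expr=m.nn.outputs[0], sense=pyo.minimize)
pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.nn.inputs[0]), pyo.value(m.nn.outputs[0]))
```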
vtprotobuf
The vtprotobuf repository provides the `protoc-gen-go-vtproto` plug-in, developed for Vitess to speed up Protocol Buffers serialization and deserialization. A successor to the `gogo/protobuf` approach, it targets the ProtoBuf APIv2 while minimizing reflection and memory allocations. It generates specialized helpers for sizing, equality, cloning, and (un)marshalling, and the generated code integrates with gRPC, Twirp, and Connect. Code generation can be driven by `protoc` or automated with `buf`.
ipex-llm
Explore a library for accelerating LLMs on Intel CPUs, GPUs, and NPUs. Integrating with frameworks such as Transformers and vLLM, it optimizes over 70 models for better performance. Recent updates add GraphRAG support on GPUs and multimodal models such as Stable Diffusion. With low-bit (e.g., INT4) optimizations, it improves processing efficiency for large models on Intel hardware, alongside advances in LLM finetuning and pipeline-parallel inference.
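A short sketch of the drop-in, Transformers-style loading path with low-bit weights, per the ipex-llm documentation (the model id is illustrative and may require access approval):

```python
# Sketch: ipex-llm mirrors the Hugging Face API; load_in_4bit applies
# INT4 weight-only quantization for Intel hardware. Model id is illustrative.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is learning to learn?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```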
dm_pix
PIX is an image processing library built on JAX, so its functions can be optimized and parallelized with transformations such as jax.jit, jax.vmap, and jax.pmap, providing essential tools for machine learning pipelines. Easily installed with pip, PIX delivers reliable performance in parallel workloads and includes a thorough test suite. Contributions are welcomed to extend its capabilities.
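For example, a single-image PIX op composes directly with JAX transformations: vmap lifts it over a batch and jit compiles the pipeline (a minimal sketch):

```python
import jax
import jax.numpy as jnp
import dm_pix as pix

images = jnp.ones((8, 64, 64, 3))  # batch of HWC float images

# pix.flip_left_right operates on one image; vmap maps it over the batch,
# and jit compiles the whole pipeline with XLA.
flip_batch = jax.jit(jax.vmap(pix.flip_left_right))
flipped = flip_batch(images)
print(flipped.shape)  # (8, 64, 64, 3)
```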
hidet
Hidet is an open-source deep learning compiler that lowers DNN models from PyTorch and ONNX to CUDA kernels for efficient inference on NVIDIA GPUs. It supports Linux with CUDA 11.6+ and Python 3.8+, applying graph-level and operator-level optimizations for enhanced performance. Comprehensive documentation and an active community support ongoing development, and installation and usage are straightforward enough to slot into existing workflows.
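Hidet registers itself as a `torch.compile` backend, so adopting it is typically a one-line change (a sketch, assuming a CUDA-capable environment):

```python
import torch
import hidet  # noqa: F401  (importing registers the 'hidet' backend)

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).cuda().eval()
x = torch.randn(1, 3, 224, 224, device="cuda")

compiled = torch.compile(model, backend="hidet")
with torch.no_grad():
    y = compiled(x)  # first call triggers Hidet's kernel compilation/tuning
print(y.shape)
```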
or-tools
Google's OR-Tools is an open-source suite for combinatorial optimization, with support for constraint programming, linear programming, and graph algorithms. Available in C++, Python, C#, and Java, it runs on all major operating systems and builds with Make, Bazel, or CMake. It handles classic problems such as the Traveling Salesman Problem and the Vehicle Routing Problem, backed by strong community support.
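A small linear program through the Python wrapper illustrates the API (standard `pywraplp` usage):

```python
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")  # Google's LP solver
x = solver.NumVar(0.0, 10.0, "x")
y = solver.NumVar(0.0, 10.0, "y")

solver.Add(x + 2 * y <= 14)
solver.Add(3 * x - y >= 0)
solver.Maximize(3 * x + 4 * y)

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print("x =", x.solution_value(), "y =", y.solution_value())
```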
intel-extension-for-pytorch
The Intel® Extension for PyTorch* adds optimized features to PyTorch for improved performance on Intel hardware. Leveraging instruction sets such as Intel® AVX-512 and AI engines such as XMX, it accelerates both CPU and GPU execution. Dedicated Large Language Model (LLM) optimizations, introduced in version 2.1.0, deliver substantial speedups (up to roughly 30% on some workloads) for well-known models including LLaMA and GPT variants. Release 2.3.0 added module-level optimization APIs as building blocks for customized LLMs, continuing advancements for Generative AI applications.
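The basic CPU workflow is a single `ipex.optimize` call on an eval-mode model, as in this minimal sketch of the extension's documented API:

```python
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

# ipex.optimize applies operator fusion and layout/dtype optimizations
# tuned for Intel hardware; bfloat16 exercises the AVX-512/AMX paths.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(32, 128)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(x)
print(out.shape)
```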
llama_ros
Discover how llama_ros brings llama.cpp's optimized inference into ROS 2 projects. It supports GGUF-based LLMs and VLMs, swapping LoRAs at runtime, and GBNF grammars, enhancing robotic applications. The repository includes detailed installation guides, Docker options, and usage examples, with CUDA support, LangChain integration, and related demos for expanding project capabilities.
CAGrad
CAGrad introduces conflict-averse gradient descent for multitask learning: it minimizes the average loss while limiting how much any individual task's loss can worsen, optimizing multiple objectives simultaneously. The method was published at NeurIPS 2021. The repository also includes FAMO, a follow-up that adapts task weighting dynamically without computing all task gradients at every step, reducing computational burden. Experiments on the NYU-v2, CityScapes, and Metaworld datasets demonstrate its effectiveness in image-to-image prediction and reinforcement learning, helping researchers optimize multitask objectives with modest resource usage.
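For two tasks, the core idea fits in a few lines: choose a convex combination of task gradients with the best worst-case alignment, then step along the average gradient nudged toward it. A simplified NumPy sketch (the paper and repository handle K tasks; `c` is the conflict-aversion coefficient):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cagrad_direction(g1, g2, c=0.5):
    """Simplified two-task sketch of conflict-averse gradient descent.

    g1, g2: flattened task gradients. Returns the update direction
    d = g0 + (sqrt(phi) / ||gw||) * gw, where g0 is the average gradient
    and gw = w*g1 + (1-w)*g2 minimizes gw.g0 + sqrt(phi)*||gw||.
    """
    g0 = 0.5 * (g1 + g2)
    phi = c ** 2 * g0.dot(g0)  # radius of the trust region around g0

    def dual(w):
        gw = w * g1 + (1.0 - w) * g2
        return gw.dot(g0) + np.sqrt(phi) * np.linalg.norm(gw)

    w = minimize_scalar(dual, bounds=(0.0, 1.0), method="bounded").x
    gw = w * g1 + (1.0 - w) * g2
    return g0 + np.sqrt(phi) / (np.linalg.norm(gw) + 1e-8) * gw
```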
levanter
Explore a framework for training large language and foundation models with a focus on readability, scalability, and reproducibility. Built on JAX, Equinox, and Haliax, it supports distributed training on TPUs and GPUs. It integrates smoothly with Hugging Face tools and provides advanced optimizers such as Sophia, built on Optax. Levanter aims for bitwise-reproducible results across computing environments and offers on-demand data preprocessing and robust logging, making it well suited to efficient model development with competitive benchmarks.
flamegraph
A versatile flame graph generator that simplifies performance profiling across platforms without extra dependencies, wrapping system profilers such as perf on Linux and DTrace on macOS. It supports both Rust and non-Rust projects, with customizable profiling options for deeper performance insight and compatibility with existing profiling tooling.
optimizer
Learn about onnxoptimizer, a C++ library that provides prepackaged optimization passes to improve ONNX models without backend dependencies. Use it to apply existing passes or to develop new ones. Installable from PyPI or buildable from source, it also offers Python and command-line APIs for straightforward integration. Related tools such as onnx-simplifier can further improve model efficiency.
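Typical usage from Python: load a model, pick passes, optimize, save (standard `onnxoptimizer` API; the file paths are illustrative):

```python
import onnx
import onnxoptimizer

model = onnx.load("model.onnx")

# Inspect the available passes, then apply a selection of them.
print(onnxoptimizer.get_available_passes()[:5])
optimized = onnxoptimizer.optimize(
    model, passes=["eliminate_identity", "fuse_bn_into_conv"]
)
onnx.save(optimized, "model_optimized.onnx")
```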
nevergrad
Nevergrad offers a powerful Python solution for gradient-free optimization across bounded continuous and discrete variables, featuring the versatile NGOpt optimizer. This tool streamlines complex minimization tasks and integrates easily with Python, suiting diverse applications from research to practice. Comprehensive documentation and a supportive community enhance its practical use.
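A minimal minimization with the default NGOpt optimizer (standard nevergrad usage; the objective is illustrative):

```python
import nevergrad as ng

def loss(x):
    # Simple convex objective; x arrives as a numpy array of length 2.
    return float(((x - 0.5) ** 2).sum())

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=200)
recommendation = optimizer.minimize(loss)
print(recommendation.value)  # close to [0.5, 0.5]
```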
ml-pen-and-paper-exercises
Discover a wide array of machine learning exercises focusing on linear algebra, graphical models, and inference methods, each complemented by detailed solutions. The topics include optimisation, factor graphs, hidden Markov models, and variational inference. Accessible as a compiled PDF on arXiv, this collection welcomes community input for enhancement. Perfect for enthusiasts of model-based learning and Monte-Carlo integration, offering in-depth comprehension through a pen-and-paper approach.
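As a taste of the territory its variational-inference exercises cover, a standard identity (stated here from common knowledge, not quoted from the collection) is the evidence decomposition:

```latex
\log p(x) \;=\;
\underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]}_{\mathrm{ELBO}(q)}
\;+\;
\mathrm{KL}\!\left(q(z) \,\middle\|\, p(z \mid x)\right)
```

Since the KL term is non-negative, the ELBO lower-bounds the log evidence, and maximizing it over q is the starting point for many derivations of the kind these exercises walk through.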
TinyLlama
TinyLlama efficiently pretrains a 1.1 billion parameter language model on 3 trillion tokens in about 90 days, adopting the same architecture and tokenizer as Llama 2. Its compact size allows deployment on edge devices, supporting real-time tasks without internet dependency, and makes it an adaptable base for open-source projects. The repository shares regular updates and evaluation metrics, serving as a valuable reference for language models under 5 billion parameters. The training code supports distributed training along with throughput optimizations for increased processing efficiency.
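Because TinyLlama keeps Llama 2's architecture and tokenizer, it loads through standard Hugging Face tooling; a short sketch (the checkpoint id below is the commonly published chat variant, assumed here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Edge deployment matters because", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```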