Introduction to NNoM: Neural Network on Microcontroller
Neural Network on Microcontroller (NNoM) is a high-level inference library for running neural networks on microcontrollers. It lets developers deploy trained Keras models to microcontrollers with minimal performance loss, so that small devices can execute sophisticated tasks efficiently.
Highlights of NNoM
- Easy Deployment: With just one line of code, users can transform a Keras model into an NNoM model, simplifying the implementation process.
- Support for Complex Structures: NNoM is versatile, accommodating advanced deep learning architectures like Inception, ResNet, and DenseNet.
- User-Friendly: Its straightforward API makes it accessible to developers of all experience levels.
- High Performance: NNoM provides multiple backend options, ensuring optimal performance across different platforms.
- Efficient Runtime: The library features onboard pre-compiling, which eliminates runtime interpreter performance loss.
- Inbuilt Evaluation Tools: Features like runtime analysis, a top-k accuracy checker, and a confusion matrix are available for users to validate their models effectively.
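The metrics behind those evaluation tools are standard; the idea of a top-k accuracy check and a confusion matrix can be sketched in plain Python (the function names here are illustrative, not part of NNoM's API):

```python
# Illustrative sketch of the metrics NNoM's evaluation tools report;
# these function names are hypothetical, not the NNoM API.

def top_k_accuracy(scores, labels, k=2):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores in this row
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

def confusion_matrix(predictions, labels, num_classes):
    """matrix[true][predicted] = number of samples with that outcome."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for pred, label in zip(predictions, labels):
        matrix[label][pred] += 1
    return matrix
```

On target, NNoM computes these from the model's output buffer at runtime, so a model can be validated directly on the device.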
Latest Updates in Version 0.4.x
- Recurrent Layers: New additions to NNoM include recurrent layers such as Simple RNN, GRU, and LSTM, providing stateful and return_sequence options.
- Structured Interface: The newly introduced structured interface utilizes a C-structure for layer configuration, offering a machine-friendly approach alongside the human-friendly Layer API.
- Per-Channel Quantisation: This version supports per-channel quantisation for convolutional layers, improving accuracy when weight ranges differ widely between channels.
- New Scripts: With version 0.4.0, NNoM defaults to the structured interface scripts, making it easier to generate model headers.
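Per-channel quantisation gives each output channel of a convolution its own scaling factor instead of one scale for the whole weight tensor. NNoM's fixed-point format uses power-of-two scales, so the idea can be sketched as follows (illustrative code, not the generator's actual implementation):

```python
import math

def per_channel_shifts(weights, bits=8):
    """For each output channel, choose the number of fractional bits (the
    power-of-two scale) that fits the channel's largest absolute weight
    into a signed `bits`-bit integer. Illustrative, not NNoM's generator code."""
    shifts = []
    for channel in weights:  # weights: one list of weights per output channel
        max_abs = max(abs(w) for w in channel)
        # bits needed for the integer part; the rest become fractional bits
        int_bits = max(0, math.ceil(math.log2(max_abs))) if max_abs > 0 else 0
        shifts.append(bits - 1 - int_bits)
    return shifts

def quantise(channel, shift, bits=8):
    """Round to fixed point with the given fractional shift, saturating."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return [min(hi, max(lo, round(w * (1 << shift)))) for w in channel]
```

A channel with small weights keeps more fractional bits (a larger shift), so its precision is not sacrificed to a sibling channel with a wide range, which is exactly the benefit over per-tensor quantisation.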
Why Choose NNoM?
NNoM is specifically tailored for microcontrollers, addressing the challenge of deploying neural networks on small platforms where resources are limited. Unlike low-level libraries that complicate usage with complex architectures, NNoM streamlines the process, enabling users to manage the neural network's structure and memory with ease. It supports the creation of wider, deeper, and denser networks, crucial for maximizing performance on microcontroller units (MCUs).
Installation and Requirements
To integrate NNoM into a project, it can be installed via Python:
pip install git+https://github.com/majianjia/nnom@master
Additionally, NNoM is compatible with TensorFlow versions up to 2.14 and supports Python up to version 3.11.
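Once installed, the one-line Keras-to-NNoM conversion goes through the generate_model function from the nnom Python package. A sketch, assuming the pip-installed package exposes generate_model at the top level as the repository's nnom.py script does, and guarded so it degrades gracefully when TensorFlow is absent:

```python
# Sketch of the one-line model conversion. Assumes the pip-installed package
# exposes generate_model at the top level, as the repository's nnom.py script
# does; TensorFlow/Keras must be installed for the import to succeed.
try:
    from nnom import generate_model
except ImportError:  # nnom (or its TensorFlow dependency) is not installed
    generate_model = None

def export_model(keras_model, x_test, header="weights.h"):
    """Quantise the trained Keras model against the calibration data x_test
    and write a C header containing the weights and the model graph."""
    if generate_model is None:
        raise RuntimeError("install nnom and tensorflow first")
    generate_model(keras_model, x_test, name=header)
```

The generated header is then compiled into the firmware alongside NNoM's C sources.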
Accessing and Utilizing NNoM
NNoM's source code, including headers and core files, is organized within the nnom_core Python package. Users can access these files by adding the relevant directories to their build system's include paths and compiling the necessary source files.
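For example, a build script could ask Python where the package is installed and pass that directory to the compiler. The nnom_core package name follows the section above; everything else in this snippet is a generic sketch:

```python
# Print the directory of the installed nnom_core package so a build system
# can add it to its include/source paths. Returns None if not installed.
import importlib.util
import os

def nnom_core_dir():
    """Return the installed nnom_core package directory, or None."""
    spec = importlib.util.find_spec("nnom_core")
    if spec is None or not spec.submodule_search_locations:
        return None
    return list(spec.submodule_search_locations)[0]

if __name__ == "__main__":
    path = nnom_core_dir()
    print(path if path else "nnom_core is not installed")
```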
Performance
Comparative studies highlight that NNoM offers competitive performance against other well-known frameworks like TensorFlow Lite and STM32Cube.AI. It provides rapid inference times and often utilizes less memory, suitable for resource-constrained environments.
Examples and Documentation
NNoM is accompanied by detailed documentation and examples, aiding users in exploring its potential through practical implementation. Resources available include a 5-minute guide, porting and optimization instructions, and several practical examples.
Improvements and Optimization
For those seeking enhanced performance, integrating the CMSIS-NN/DSP optimized backend can boost inference speed by up to 5 times compared with the default pure-C backend.
Conclusion
NNoM marks a leap forward for developers working with microcontrollers, offering a robust solution that simplifies deploying neural networks on limited-resource platforms without sacrificing performance or flexibility. Its seamless integration and comprehensive support make it an invaluable tool for pushing the frontier of what microcontrollers can achieve with AI technology.