# TensorFlow Lite

## ChineseTtsTflite
An offline text-to-speech solution built with Kotlin, Jetpack Compose, and TensorFlow Lite. It offers FastSpeech for fast audio generation on mid-range devices and Tacotron for higher-quality output that demands more processing power. Includes detailed model download instructions and a TensorFlow Lite build optimized for small size in mobile applications.
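The core runtime pattern behind such an app is two TensorFlow Lite models run in sequence: an acoustic model (FastSpeech or Tacotron) that turns text into a mel spectrogram, followed by a vocoder that turns the spectrogram into audio. Below is a minimal Kotlin sketch of that pipeline using the standard `org.tensorflow.lite.Interpreter` API; the class, the tensor shapes, the single phoneme-ID input, and the MB-MelGAN vocoder are illustrative assumptions rather than ChineseTtsTflite's actual code.

```kotlin
// Minimal sketch of a two-stage TTS pipeline (text -> mel spectrogram -> waveform)
// using the standard TensorFlow Lite Interpreter. Model files, tensor shapes, the
// phoneme encoding and the MB-MelGAN vocoder are illustrative assumptions.
import org.tensorflow.lite.Interpreter
import java.io.File

class TtsPipeline(acousticModel: File, vocoderModel: File) {
    private val acoustic = Interpreter(acousticModel)  // e.g. a FastSpeech .tflite (assumed)
    private val vocoder = Interpreter(vocoderModel)    // e.g. an MB-MelGAN .tflite (assumed)

    /** Turns a phoneme-ID sequence into raw PCM samples. */
    fun synthesize(phonemeIds: IntArray): FloatArray {
        // Stage 1: phoneme IDs -> mel spectrogram. Assumes a model exported with a
        // single int32 input of shape [1, T] and a fixed-shape output, so the output
        // buffer can be sized from the model's reported shape.
        val melShape = acoustic.getOutputTensor(0).shape()   // e.g. [1, frames, 80]
        val mel = Array(1) { Array(melShape[1]) { FloatArray(melShape[2]) } }
        acoustic.run(arrayOf(phonemeIds), mel)

        // Stage 2: mel spectrogram -> waveform samples.
        val pcmShape = vocoder.getOutputTensor(0).shape()    // e.g. [1, samples]
        val pcm = Array(1) { FloatArray(pcmShape[1]) }
        vocoder.run(mel, pcm)
        return pcm[0]
    }

    fun close() {
        acoustic.close()
        vocoder.close()
    }
}
```

Reusing the interpreters across calls and running synthesis off the main thread are the usual ways to keep this pattern responsive on mid-range devices.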
## ML-examples
Explore a diverse set of machine learning tutorials showcasing the Arm NN SDK and other technologies for deploying models on Android and on platforms such as the Raspberry Pi. Projects cover neural style transfer, gesture recognition, and more, along with CMSIS-NN and TensorFlow guides for Arm Corstone. The full source code is available on GitHub.
## Android-TensorFlow-Lite-Example
The Android TensorFlow Lite Example shows how to integrate TensorFlow Lite into an Android app, using the device camera for object detection. It serves as a practical guide for developers who want to add machine learning capabilities to mobile apps: by embedding TensorFlow Lite models, applications gain AI-driven features while keeping inference efficient on-device. The example is useful to both novice and experienced Android developers.
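At its core, this kind of integration comes down to loading a bundled `.tflite` model into an `org.tensorflow.lite.Interpreter` and feeding it camera frames. The sketch below illustrates that pattern with the core Interpreter API only; the asset name, the 300x300 float input, the [0, 1] scaling, and the SSD-style output layout are assumptions for illustration, not the exact details of this example project.

```kotlin
// Minimal sketch of feeding camera frames to a TensorFlow Lite object detector,
// using only the core Interpreter API. The asset name, input size, pixel scaling,
// 10-detection cap and output-tensor order are illustrative assumptions.
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.channels.FileChannel

private fun loadModel(context: Context, assetName: String): ByteBuffer {
    // Memory-map the bundled .tflite file (assumes it is stored uncompressed).
    context.assets.openFd(assetName).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }
}

class FrameDetector(context: Context) {
    private val interpreter = Interpreter(loadModel(context, "detect.tflite"))

    class Detection(val box: FloatArray, val classId: Int, val score: Float)

    /** Runs the detector on one camera frame and returns the raw detections. */
    fun detect(frame: Bitmap): List<Detection> {
        // Resize the frame and pack it into a direct float buffer, RGB scaled to [0, 1].
        val scaled = Bitmap.createScaledBitmap(frame, 300, 300, true)
        val pixels = IntArray(300 * 300)
        scaled.getPixels(pixels, 0, 300, 0, 0, 300, 300)
        val input = ByteBuffer.allocateDirect(300 * 300 * 3 * 4).order(ByteOrder.nativeOrder())
        for (p in pixels) {
            input.putFloat(((p shr 16) and 0xFF) / 255f)
            input.putFloat(((p shr 8) and 0xFF) / 255f)
            input.putFloat((p and 0xFF) / 255f)
        }
        input.rewind()

        // Assumed SSD-style outputs: boxes, class IDs, scores, detection count.
        val boxes = Array(1) { Array(10) { FloatArray(4) } }
        val classes = Array(1) { FloatArray(10) }
        val scores = Array(1) { FloatArray(10) }
        val count = FloatArray(1)
        val outputs = mapOf<Int, Any>(0 to boxes, 1 to classes, 2 to scores, 3 to count)
        interpreter.runForMultipleInputsOutputs(arrayOf<Any>(input), outputs)

        return (0 until count[0].toInt()).map { i ->
            Detection(boxes[0][i], classes[0][i].toInt(), scores[0][i])
        }
    }
}
```

In a real app the frames would come from a camera callback and `detect` would run on a background thread, with the returned boxes drawn over the camera preview.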
## ModelAssistant
Discover how to deploy state-of-the-art AI algorithms on low-cost hardware using Seeed Studio's open-source platform. ModelAssistant helps developers train and visualize AI models efficiently on microcontrollers and single-board computers while optimizing performance and energy consumption. It addresses practical applications such as anomaly detection and computer vision, with support for formats including TensorFlow Lite and ONNX. Updates add models such as YOLO-World and MobileNetV4 for embedded devices, and pre-trained models plus intuitive tooling make it easy to integrate AI with SSCMA.
## ai-edge-torch
AI Edge Torch simplifies converting PyTorch models to the .tflite format for on-device use with TensorFlow Lite and MediaPipe, benefiting Android, iOS, and IoT applications. The library supports CPU execution, with initial GPU and NPU support, and aids the deployment of large language models and transformer models. Its PyTorch converter streamlines TFLite conversion, while the Generative API supports building mobile-optimized transformers. Together, these tools improve model performance and MediaPipe integration, facilitating end-to-end app development. More about AI Edge Torch's evolving features is available on GitHub.
## CFU-Playground
The project provides a framework for enhancing FPGA-based soft processors, specifically targeting improvements in machine learning performance. By hiding infrastructure complexity, it lets users concentrate on designing custom processor instructions that speed up computation, and it supports fast, collaborative iteration on those enhancements. A comprehensive guide walks through everything from selecting a TensorFlow Lite model to running simulations with Renode or Verilator. While designed for hardware such as the Arty FPGA board, it also allows simulation without a physical device. Built almost entirely on open-source tools (Vivado being the exception), it suits engineers, interns, and students exploring machine learning processor design.
## whisper_android
This guide details how to incorporate Whisper and the Recorder class into Android apps for offline speech recognition. It covers setup with TensorFlow Lite, practical code examples for Whisper initialization, and audio recording integration for efficient speech-to-text functionality, along with key details such as setting file paths, managing permissions, and ensuring accurate transcription.
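The flow the guide describes boils down to three steps: request the microphone permission, record a clip to a known file path, and hand that file to a Whisper TensorFlow Lite model for transcription. The sketch below captures that flow; `Whisper` and `Recorder` are hypothetical stand-ins with made-up signatures rather than the repository's exact classes, and the model and vocabulary file names are placeholders.

```kotlin
// Minimal sketch of the offline speech-to-text flow: permission -> record -> transcribe.
import android.Manifest
import android.app.Activity
import androidx.core.app.ActivityCompat
import java.io.File

// Hypothetical stand-ins for the guide's Whisper and Recorder components;
// method names and signatures are illustrative, not the repository's API.
interface Whisper {
    fun loadModel(modelPath: String, vocabPath: String)
    fun transcribe(wavPath: String): String
}

interface Recorder {
    fun setOutputFile(path: String)
    fun start()
    fun stop()
}

fun runOfflineTranscription(activity: Activity, whisper: Whisper, recorder: Recorder) {
    // 1. Recording audio needs the RECORD_AUDIO runtime permission. A real app waits
    //    for the result in onRequestPermissionsResult before recording.
    ActivityCompat.requestPermissions(
        activity, arrayOf(Manifest.permission.RECORD_AUDIO), /* requestCode = */ 1
    )

    // 2. Keep the recording and model files in app-private storage (placeholder names).
    val wavFile = File(activity.filesDir, "recording.wav")
    recorder.setOutputFile(wavFile.absolutePath)
    whisper.loadModel(
        modelPath = File(activity.filesDir, "whisper.tflite").absolutePath,
        vocabPath = File(activity.filesDir, "vocab.bin").absolutePath
    )

    // 3. Record a short clip, then transcribe it (run off the main thread; a real app
    //    would stop on a button press rather than a fixed delay).
    recorder.start()
    Thread.sleep(5_000)   // placeholder for "user speaks"
    recorder.stop()
    val transcript = whisper.transcribe(wavFile.absolutePath)
    println(transcript)
}
```

A production app would wait for the permission result, stop recording on a user action instead of a timer, and run transcription on a background thread or coroutine.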
## react-native-fast-tflite
Provides efficient TensorFlow Lite execution in React Native through JSI and zero-copy ArrayBuffers. It supports GPU-accelerated delegates such as CoreML and Metal, along with swapping models dynamically at runtime. Integration with VisionCamera enables advanced imaging use cases, optimizing AI model deployment across iOS and Android. Under the hood it uses the low-level C/C++ TensorFlow Lite API with direct memory access to speed up model execution.
## PINTO_model_zoo
Discover a repository that facilitates straightforward inter-conversion of AI models among TensorFlow, PyTorch, ONNX, and other major frameworks. With support for a range of quantization methods and optimization passes, the project improves model performance across platforms such as EdgeTPU and CoreML. Community contributions of sample code are encouraged, and the repository tracks ongoing progress in model conversion techniques for streamlined deployment.
## flutter-tflite
The TensorFlow Lite Flutter plugin integrates machine learning capabilities into Flutter applications. It offers efficient inference on Android and iOS through TensorFlow Lite's API and supports acceleration with NNAPI and GPU delegates. The plugin's structure mirrors the TensorFlow Lite Java and Swift APIs, ensuring smooth integration and low-latency performance. Contributions are encouraged to keep pace with evolving standards and to improve machine learning support for the Flutter community.