# ASR

## Streamer-Sales
Streamer-Sales uses an AI-powered model to boost product sales through livestream presentations. It generates tailored narratives that highlight product features and improve purchase intent. Features include instant copywriting, accelerated inference, voice-to-text, digital human creation, and real-time interaction. It integrates seamlessly with ASR, TTS, and RAG technologies, optimizing sales for both live and offline environments.
## whisper.rn
Whisper.rn integrates OpenAI's Whisper ASR model into React Native apps. The library supports multiple platforms and delivers high-performance recognition through whisper.cpp. Key features include detailed installation instructions for iOS and Android and real-time transcription capabilities. It also handles microphone permissions, audio session management, and iOS Core ML support, providing a comprehensive solution for adding speech recognition to applications.
## stable-ts
The stable-ts library enhances the functionality of Whisper by providing reliable timestamps in audio transcription. Key integrations include voice isolation, noise reduction, and dynamic time warping to achieve precise word-level timestamps. It supports diverse model configurations and preprocessing methods, improving transcription accuracy by suppressing silence and refining outputs. The library requires FFmpeg and PyTorch for installation, offering cross-platform compatibility and customizable options via denoisers and voice detection methods. Additionally, it connects with any ASR system, enabling its application in various audio transcription scenarios.
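For orientation, here is a minimal sketch of typical stable-ts usage, assuming the package is importable as `stable_whisper` and that `audio.mp3` is a placeholder input file:

```python
# Minimal sketch of stable-ts transcription with stabilized timestamps.
# Assumes FFmpeg and PyTorch are installed, as noted above; paths are placeholders.
import stable_whisper

model = stable_whisper.load_model("base")   # load a Whisper model via stable-ts
result = model.transcribe("audio.mp3")      # transcribe with silence suppression
result.to_srt_vtt("audio.srt")              # export stabilized timestamps as SRT subtitles
```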
## wav2letter
Wav2letter has been consolidated into Flashlight ASR; this repository preserves pivotal pre-consolidation resources and detailed recipes for reproducing significant research models such as ConvNet-based and sequence-to-sequence architectures. It provides data preparation tools, keeps the recipes reproducible with Flashlight 0.3.2, and connects users with the active wav2letter community. The MIT-licensed project offers solutions for both supervised and semi-supervised speech recognition.
## StreamSpeech
StreamSpeech is a distinctive tool for speech translation, proficient in both offline and simultaneous scenarios. It employs a unified model that combines streaming ASR, simultaneous speech-to-text translation, and speech-to-speech translation. By producing real-time intermediate outputs, it ensures low-latency communication. StreamSpeech is user-friendly, supporting eight tasks and shipping with a web GUI demo for hands-on experience. It is designed for developers aiming to leverage cutting-edge performance in real-time audio processing.
## pykaldi
PyKaldi acts as a bridge to smoothly integrate Kaldi's robust capabilities into Python. It offers comprehensive wrappers for C++ code from Kaldi and OpenFst, tailored for the speech recognition community. PyKaldi facilitates complex processes like manipulating Kaldi objects and using low-level functions without requiring extensive C++ knowledge. It includes high-level ASR, alignment, and segmentation modules to boost effectiveness for Python developers. Its NumPy integration ensures efficient data manipulation, backed by a modular design for easy maintenance and scalability. PyKaldi effectively extends Python's reach in ASR projects, enhancing synergy between Python and Kaldi.
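As a hedged illustration of the high-level ASR module, the sketch below follows the style of PyKaldi's documented nnet3 decoding example; the model, graph, symbol table, and feature archive (`final.mdl`, `HCLG.fst`, `words.txt`, `feats.ark`) are placeholder paths, and the option values are illustrative rather than tuned:

```python
# Sketch of offline nnet3 lattice decoding with PyKaldi (placeholder paths).
from kaldi.asr import NnetLatticeFasterRecognizer
from kaldi.decoder import LatticeFasterDecoderOptions
from kaldi.nnet3 import NnetSimpleComputationOptions
from kaldi.util.table import SequentialMatrixReader

decoder_opts = LatticeFasterDecoderOptions()
decoder_opts.beam = 13
decoder_opts.max_active = 7000

decodable_opts = NnetSimpleComputationOptions()
decodable_opts.acoustic_scale = 1.0
decodable_opts.frame_subsampling_factor = 3

# Build a recognizer from an acoustic model, decoding graph, and word symbol table.
asr = NnetLatticeFasterRecognizer.from_files(
    "final.mdl", "HCLG.fst", "words.txt",
    decoder_opts=decoder_opts, decodable_opts=decodable_opts)

# Decode pre-extracted features from a Kaldi archive and print the 1-best text.
for key, feats in SequentialMatrixReader("ark:feats.ark"):
    out = asr.decode(feats)
    print(key, out["text"])
```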
## awesome-audio-plaza
This resource provides daily updates on audio-related papers, projects, and tools from leading platforms such as arXiv and GitHub. It covers topics including ASR and music generation, offering valuable insight into current trends and innovations in audio technology.
## parrots
Parrots provides an efficient solution for Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) with multilingual support for Chinese, English, and Japanese. Built on models such as 'distilwhisper' and 'GPT-SoVITS', the toolkit enables seamless voice recognition and synthesis. It supports straightforward installation, command-line operation, and integration with platforms like Hugging Face, making it well suited to applications requiring advanced voice interaction.
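A hedged sketch of the ASR side is shown below; the class and method names (`SpeechRecognition`, `recognize_speech_from_file`) are assumptions based on the project's documented usage and should be verified against the installed version:

```python
# Hedged sketch of parrots speech recognition; names and defaults are assumptions,
# and "sample.wav" is a placeholder audio file.
from parrots import SpeechRecognition

model = SpeechRecognition()                            # loads the default ASR model
text = model.recognize_speech_from_file("sample.wav")  # transcribe a local file
print(text)
```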
## espnet_onnx
The library simplifies exporting, quantizing, and optimizing ESPnet models to the ONNX format so that inference can run independently of PyTorch. It provides ASR and TTS demonstrations on Google Colab using pre-existing models without additional requirements. Capable of handling both pretrained and custom models, it offers detailed configuration options and supports GPU inference for faster processing. Extensive installation and deployment guidelines help developers integrate the library across multiple environments.
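A hedged sketch of the export-then-infer workflow follows; the model tag and wav path are placeholders, and the exact return structure may vary between versions:

```python
# Sketch of exporting a pretrained ESPnet ASR model to ONNX and decoding with it.
import librosa
from espnet_onnx.export import ASRModelExport
from espnet_onnx import Speech2Text

tag = "username/pretrained_asr_model"   # placeholder ESPnet model tag

# One-time export: downloads the pretrained model and converts it to ONNX,
# optionally producing a quantized variant as well.
exporter = ASRModelExport()
exporter.export_from_pretrained(tag, quantize=True)

# Inference from the exported files; PyTorch is not needed at this stage.
speech2text = Speech2Text(tag_name=tag)
speech, _ = librosa.load("sample.wav", sr=16000)
nbest = speech2text(speech)
print(nbest[0][0])                      # best-hypothesis text
```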
## mrcp-plugin-with-freeswitch
This open-source project connects FreeSWITCH with the UniMRCP Server to enable speech recognition and synthesis through the xfyun (iFLYTEK) platform. It outlines how to build an end-to-end voice call center that converts incoming calls to text via ASR and answers with synthesized speech according to tailored logic. Comprehensive setup steps, error-handling strategies, and configuration details are included to improve operational efficiency across different environments.
## Maix-Speech
Maix-Speech is an optimized AI speech library that runs on both embedded devices and PCs, offering Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) for Chinese only. It enables voice interaction across platforms such as x86_64 and the R329 SoC. The library is licensed under Apache 2.0 and gives developers the resources needed to incorporate speech functions into their own systems, with detailed instructions available in its GitHub repository.