# Multilingual
IMS-Toucan
IMS Toucan is a toolkit for multilingual text-to-speech synthesis supporting over 7,000 languages. Created at the Institute for Natural Language Processing, University of Stuttgart, it is fast, controllable, and runs efficiently on modest hardware. Free demos and a comprehensive multilingual TTS dataset are available through Hugging Face. Easy-to-follow installation instructions cover Linux, Windows, and macOS for both training and inference, with pretrained models available to save compute.
YAYI2
YAYI2, developed by Wenge Research, is a multilingual large language model pre-trained on over 2 trillion tokens. It is aligned for general and domain-specific use through instruction tuning on millions of examples and reinforcement learning from human feedback. The model improves language understanding, reasoning, and code generation, outperforming open-source models of similar size. A detailed technical report accompanies the 30B-parameter release, which invites the community to help advance the open-source pre-training ecosystem.
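A minimal loading sketch, assuming the released weights are pulled from the Hugging Face Hub with the standard transformers API; the model id and generation settings here are illustrative rather than taken from the project's own documentation.

```python
# Hypothetical sketch: loading a YAYI2-30B checkpoint with Hugging Face
# transformers. The model id below is assumed; check the official release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wenge-research/yayi2-30b"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 30B weights need multiple GPUs or offloading
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```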
primeqa
PrimeQA is an open-source repository for training and evaluating state-of-the-art question answering models. It helps researchers replicate experiments from recent NLP conferences and covers information retrieval with models such as BM25 and ColBERT, multilingual machine reading comprehension, and multilingual question generation. Built on the Transformers toolkit, it gives easy access to pre-trained models, includes retrieval-augmented generation with GPT models, and performs strongly on multiple leaderboards, making it a solid resource for multilingual QA and domain adaptation.
BayLing
BayLing utilizes advanced language model capabilities to enhance cross-lingual communication by optimizing translation and instruction following. Suitable for deployment on consumer-grade GPUs, BayLing supports English and Chinese text creation, translation, and interaction. The latest version, BayLing-13B-v1.1, includes expanded Chinese linguistic knowledge, improving the evaluation and application of large language models in various scenarios. Try the online demo or local GUI for efficient cross-language translation and interaction.
fish-speech
Explore a robust text-to-speech system offering zero-shot and few-shot functionalities across languages like English, Japanese, and Chinese. The platform supports fast processing with a real-time factor of 1:5 on an Nvidia RTX 4060 and maintains low character and word error rates. Features include a Gradio-based web UI and a PyQt6 interface for easy cross-platform deployment on Windows, Linux, and macOS, enhanced by fish-tech acceleration.
openai-whisper-api
OpenAI Whisper API is an open-source microservice for speech-to-text conversion built with Node.js and TypeScript. It runs in Docker with no external dependencies and supports multilingual tasks, including language identification and translation, making it useful for applications such as video-call and YouTube transcription. Straightforward usage makes it accessible to developers of all skill levels, and the MIT license eases integration.
Qwen2
Qwen2.5 gives developers multilingual and long-context support, improving application performance across diverse deployment scenarios. It also offers enhanced fine-tuning capabilities and detailed performance metrics to help optimize projects.
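As a rough usage sketch, a Qwen2.5 instruct checkpoint can be driven through the standard transformers chat workflow; the model id, prompt, and generation settings below are illustrative.

```python
# Minimal sketch of chat-style generation with a Qwen2.5 instruct checkpoint
# via transformers; model id and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Translate 'good morning' into Japanese and Spanish."},
]
# Render the conversation with the model's chat template before tokenizing.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```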
FunASR
The toolkit bridges academic research and industry, covering tasks such as speech recognition (ASR), voice activity detection (VAD), punctuation restoration, and speaker verification. It supports fine-tuning and inference with high-performance pre-trained models. The Model Zoo, featuring models such as Paraformer and Whisper-large, is accessible via ModelScope and Hugging Face and caters to multilingual requirements. Easy-to-use scripts and tutorials simplify the deployment of speech recognition services, helping developers build tailored solutions.
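A hedged sketch of the AutoModel interface the FunASR README follows; the exact model names for ASR, VAD, and punctuation restoration are assumptions and may differ between releases.

```python
# Sketch: Paraformer ASR with voice activity detection and punctuation
# restoration via FunASR's AutoModel. Model names are assumed.
from funasr import AutoModel

model = AutoModel(model="paraformer-zh", vad_model="fsmn-vad", punc_model="ct-punc")

result = model.generate(input="example.wav")  # path to a local audio file
print(result[0]["text"])                      # recognized transcript
```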
vosk-api
Vosk is an open source speech recognition toolkit offering offline capabilities in over 20 languages. It is suitable for applications like chatbots, smart devices, and transcription services. The toolkit features compact models for efficient, zero-latency performance and supports multiple programming languages and platforms, ranging from Raspberry Pi to large clusters, making it versatile for various speech-driven tasks.
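For example, offline transcription of a 16 kHz mono WAV file with the vosk Python package might look like the following sketch; the audio path is a placeholder.

```python
# Minimal offline transcription sketch with the vosk Python package.
import json
import wave

from vosk import KaldiRecognizer, Model

model = Model(lang="en-us")            # downloads a small English model if missing
wf = wave.open("speech.wav", "rb")     # 16-bit PCM, mono (placeholder path)
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        print(json.loads(rec.Result())["text"])   # finalized segment
print(json.loads(rec.FinalResult())["text"])      # remaining audio
```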
text2vec
Text2vec offers cutting-edge text-to-vector transformations with models like Word2Vec, BERT, and CoSENT, optimizing semantic similarity assessments. The platform is enhanced with multi-GPU inference capabilities and a CLI tool for bulk vectorization to ensure scalability. It regularly updates models specifically designed for Chinese and multilingual datasets, verified through rigorous testing. Text2vec is suitable for semantic matching, including sentence-to-sentence and sentence-to-paragraph analysis, providing notable enhancements in text understanding and efficiency.
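A small sketch of the library's SentenceModel encoder; the checkpoint name is one of the project's published models and is assumed here.

```python
# Sketch: sentence embeddings for semantic matching with text2vec.
from text2vec import SentenceModel

model = SentenceModel("shibing624/text2vec-base-chinese")  # assumed checkpoint
sentences = ["如何更换花呗绑定银行卡", "花呗更改绑定银行卡"]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, hidden_size), ready for cosine-similarity scoring
```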
wetts
WeTTS provides a comprehensive end-to-end text-to-speech toolkit designed for robust production use. It leverages advanced models like VITS and integrates WeTextProcessing for effective text normalization and prosody control. Supporting multiple open-source datasets such as Baker and AISHELL-3, WeTTS is compatible with a wide range of hardware including x86 and Android, offering developers a reliable solution for developing high-quality TTS applications.
speech_dataset
Explore a diverse collection of speech datasets in multiple languages, including Chinese, English, and Japanese, designed for speech recognition, synthesis, and speaker diarization. This collection supports various applications, such as speech commands and ASR system evaluation, facilitating advancements in speech technology. Notable datasets like Common Voice and LibriSpeech play a crucial role in enhancing machine learning models. This resource is invaluable for researchers seeking comprehensive audio data for developing speech-related solutions across different linguistic contexts.
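The repository itself only catalogues links, but one of the corpora it lists, LibriSpeech, can be pulled for experimentation with the Hugging Face datasets library; the dataset id, configuration, and split below are assumptions, and loading details vary across datasets versions.

```python
# Illustrative only: fetch a LibriSpeech split via the datasets library.
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="validation")
sample = ds[0]
print(sample["text"])                    # reference transcript
print(sample["audio"]["sampling_rate"])  # waveform metadata (16 kHz)
```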
xtts2-ui
The XTTS-2-UI project provides a straightforward interface for cloning voices in 16 languages using text and a brief audio sample. The model tts_models/multilingual/multi-dataset/xtts_v2 is automatically downloaded when first used, aiding in seamless voice cloning experiments. It supports both voice recording and uploading with a few setup steps. The application can operate via terminal or Streamlit, requiring agreement to the terms of service initially.
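The model id above is a Coqui TTS identifier, so the UI appears to build on that library; as a hedged sketch, the equivalent direct library call would look roughly like this, with the reference clip path as a placeholder.

```python
# Sketch of the underlying Coqui TTS call for XTTS v2 voice cloning.
# First use prompts for agreement to the model's terms of service.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")  # downloads on first use
tts.tts_to_file(
    text="Hello, this voice was cloned from a short sample.",
    speaker_wav="targets/speaker_sample.wav",  # short reference clip (placeholder path)
    language="en",
    file_path="output.wav",
)
```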
setfit
SetFit is an efficient framework that fine-tunes Sentence Transformers for few-shot learning without prompts, requiring minimal labeled data. It speeds up training and supports multilingual tasks, reaching high accuracy with as few as eight labeled samples per class. It integrates seamlessly with the Hugging Face Hub for straightforward training and deployment across different text classification tasks.
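A minimal few-shot sketch using the older SetFitTrainer interface (newer releases expose a Trainer/TrainingArguments API instead); the base checkpoint, toy dataset, and hyperparameters are illustrative.

```python
# Sketch: few-shot text classification with setfit on a tiny toy dataset.
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["great battery life", "screen cracked after a day",
             "fast shipping", "arrived broken"],
    "label": [1, 0, 1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds, num_iterations=20)
trainer.train()

print(model.predict(["battery died within hours", "love this phone"]))
```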
subtitle
Discover an open-source, self-hosted tool for generating subtitles with AI precision and multilingual capabilities. Easily integrate it into workflows for diverse language content, enhancing privacy and control on your own servers. Modify the tool to meet specific needs while ensuring natural-sounding subtitles.
wtpsplit
The wtpsplit project facilitates efficient text segmentation into sentences or semantic units in 85 languages. Leveraging advanced SaT models, it enhances performance while lowering computational needs compared to earlier WtP models. Offering ONNX support for faster processing and LoRA modules for domain-specific or stylistic adjustments, it is well-suited for diverse uses including paragraph segmentation. Seamlessly compatible with platforms like HuggingFace, it is a valuable tool for academic and development settings seeking adaptable text segmentation solutions.
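A short segmentation sketch with one of the SaT checkpoints; the checkpoint name follows the project's naming scheme and is assumed here.

```python
# Sketch: sentence segmentation with a small SaT model from wtpsplit.
from wtpsplit import SaT

sat = SaT("sat-3l-sm")  # assumed small 3-layer SaT checkpoint
text = "this is a test this is another test el modelo también funciona en español"
print(sat.split(text))  # list of segmented sentences
```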
Feedback Email: [email protected]