# XTTS Streaming Server
The XTTS Streaming Server is a demo of streaming text-to-speech audio generation. Note that it is not designed to handle concurrent streaming requests and is not suitable for production use.
## Running the Server
The setup of the XTTS Streaming Server can be accomplished in several ways, depending on the hardware and specific requirements.
### Using a Pre-built Image
The easiest way to get the server up and running is with a pre-built Docker image. Three variants are available:
- **CUDA 12.1** (recommended if you have a suitable NVIDIA GPU, for the best performance):

  ```shell
  $ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
  ```
- **CUDA 11.8** (for older NVIDIA GPUs):

  ```shell
  $ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
  ```
- **CPU** (not recommended due to slow inference, but usable when no GPU is available):

  ```shell
  $ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
  ```
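The mapping from variant to image tag above can be captured in a small shell helper. This is just a convenience sketch; the `xtts_image` function name is illustrative and not part of the project, while the tags are the ones listed above.

```shell
# Map a variant name to the corresponding pre-built image tag.
# (The helper name `xtts_image` is illustrative, not part of the project.)
xtts_image() {
  case "$1" in
    cuda121) echo "ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121" ;;
    cuda118) echo "ghcr.io/coqui-ai/xtts-streaming-server:latest" ;;
    cpu)     echo "ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu" ;;
    *)       echo "usage: xtts_image {cuda121|cuda118|cpu}" >&2; return 1 ;;
  esac
}

xtts_image cuda121
```

For example, `docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 "$(xtts_image cuda121)"` starts the CUDA 12.1 variant.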
To deploy a fine-tuned model, mount a folder containing your custom model instead. The model folder must contain `config.json`, `model.pth`, and `vocab.json`:

```shell
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
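Before mounting a custom model folder, it can help to confirm that all three required files are present. A minimal sketch (the `check_model_folder` helper is illustrative, not part of the project):

```python
from pathlib import Path

# Files the server expects inside the mounted model folder.
REQUIRED_FILES = ("config.json", "model.pth", "vocab.json")

def check_model_folder(folder: str) -> list:
    """Return the required files that are missing from `folder`."""
    root = Path(folder)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

missing = check_model_folder("/path/to/model/folder")
if missing:
    print("missing files:", ", ".join(missing))
```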
Note that running the server requires agreeing to the terms of the CPML license, which is indicated by setting the `COQUI_TOS_AGREED` environment variable to `1` (as in the commands above).
### Building the Image Yourself
Advanced users may prefer to build the Docker image themselves. This provides greater control over the configuration and dependencies.
- Clone the repository:

  ```shell
  $ git clone git@github.com:coqui-ai/xtts-streaming-server.git
  ```
- Navigate to the server directory, build the image, and run it (replace `DOCKERFILE` with the Dockerfile variant you want to build):

  ```shell
  $ cd xtts-streaming-server/server
  $ docker build -t xtts-stream . -f DOCKERFILE
  $ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
  ```
## Testing the Running Server
After setting up the server, it's crucial to verify that it operates correctly.
- Clone the repository, if you have not already:

  ```shell
  $ git clone git@github.com:coqui-ai/xtts-streaming-server.git
  ```
- Using the Gradio demo, a simple web UI for testing the server:

  ```shell
  $ cd xtts-streaming-server
  $ python -m pip install -r test/requirements.txt
  $ python demo.py
  ```
- Using the test script, for those who prefer a more traditional approach:

  ```shell
  $ cd xtts-streaming-server/test
  $ python -m pip install -r requirements.txt
  $ python test_streaming.py
  ```
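The test script exercises the server's HTTP API over the port mapped above. The sketch below shows the general shape of such a streaming client, assuming the server listens on `localhost:8000`; the endpoint path and payload fields are assumptions and should be checked against `test_streaming.py` in the repository.

```python
import json
import urllib.request

# Assumption: the server was started with `-p 8000:80` as shown above.
SERVER = "http://localhost:8000"

def build_request(text, language="en", endpoint="/tts_stream"):
    """Build a POST request for a streaming TTS endpoint.

    The endpoint path and payload fields here are assumptions modeled on
    the test script; consult the repository for the exact API.
    """
    payload = json.dumps({"text": text, "language": language}).encode("utf-8")
    return urllib.request.Request(
        SERVER + endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def stream_to_file(req, path, chunk_size=4096):
    """Consume the audio response chunk by chunk, as a streaming client would."""
    total = 0
    with urllib.request.urlopen(req) as resp, open(path, "wb") as out:
        while chunk := resp.read(chunk_size):
            out.write(chunk)
            total += len(chunk)
    return total

req = build_request("Hello from the XTTS streaming server.")
# stream_to_file(req, "out.wav")  # requires a running server
```

Reading the response in fixed-size chunks, rather than buffering it whole, is what lets playback begin before synthesis finishes.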
Through these methods, users can ensure the proper functionality of the XTTS Streaming Server and explore its capabilities within a controlled, non-production environment.