StyleTTS 2

Style diffusion and adversarial training for human-level TTS with style transfer.

Open Source · Self Hosted · Offline Capable · GPU Required (6 GB+ VRAM)

About

StyleTTS 2 is a text-to-speech model that uses style diffusion and adversarial training to achieve human-level speech synthesis quality. It supports style transfer, allowing the speaking style to be controlled via a reference audio clip. Developed by researchers at Columbia University, it is released under the MIT license and requires a GPU with at least 6 GB of VRAM.
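Since the model needs a CUDA GPU with 6 GB+ of VRAM, a pre-flight check can save a failed model load. The sketch below is illustrative and not part of the StyleTTS 2 project: the function names (`meets_vram_minimum`, `check_gpu`) are hypothetical, and only standard PyTorch CUDA queries are used.

```python
def meets_vram_minimum(total_bytes: int, required_gb: float = 6.0) -> bool:
    """Return True if the reported VRAM covers the stated minimum."""
    return total_bytes >= required_gb * 1024 ** 3


def check_gpu() -> bool:
    """Query CUDA via PyTorch if it is installed; report False otherwise."""
    try:
        import torch  # StyleTTS 2 is a PyTorch model
    except ImportError:
        return False
    if not torch.cuda.is_available():
        return False
    # total_memory is the device's VRAM in bytes
    props = torch.cuda.get_device_properties(0)
    return meets_vram_minimum(props.total_memory)


if __name__ == "__main__":
    print("GPU meets the 6 GB VRAM minimum:", check_gpu())
```

Running this before installing the model's dependencies gives a quick yes/no on whether local inference is feasible on the current machine.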

Reviews (0)


No reviews yet.

Details

Price
Free
Platform
Local/Desktop
Difficulty
Advanced (4/5)
License
MIT
Minimum VRAM
6 GB
Added
Apr 3, 2026

Similar Tools

Featured

Transformer-based text-to-audio model by Suno that generates speech, music, and sound effects.

Open Source · Self Hosted · Offline · GPU 4 GB+
Intermediate

Deep learning TTS library by Mozilla with Tacotron and WaveRNN implementations.

Open Source · Self Hosted · Offline · GPU 4 GB+
Intermediate
Featured

Lightweight and expressive TTS model with 82M parameters for fast local inference.

Open Source · Self Hosted · Offline
Easy