Featured Tool
Hugging Face TRL
Library for training LLMs with reinforcement learning (RLHF, DPO, PPO).
Open Source · Self Hosted · Offline Capable · GPU Required (8GB+ VRAM)
About
TRL (Transformer Reinforcement Learning) by Hugging Face provides tools for training language models with RLHF, DPO, PPO, supervised fine-tuning (SFT), and reward modeling. It is built on top of Transformers and PEFT, and supports distributed training via DeepSpeed and FSDP. Released under the Apache 2.0 license.
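Among the methods listed, DPO optimizes a simple pairwise preference loss. A minimal standalone sketch of that loss in plain Python (per-sequence log-probabilities are assumed precomputed; the function name and values are illustrative, not TRL's API):

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, given summed log-probabilities of the
    chosen and rejected responses under the policy and a frozen reference model."""
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((policy_chosen - ref_chosen) - (policy_rejected - ref_rejected))
    # -log(sigmoid(margin)): small when the policy favors the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no preference shift relative to the reference, the margin is 0
# and the loss is log(2) ≈ 0.6931.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # → 0.6931
```

TRL's DPOTrainer wraps this objective with batching, tokenization, and reference-model handling, so in practice you supply a preference dataset rather than log-probabilities directly.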
Details
- Category: Model Training & Fine-Tuning
- Price: Free
- Platform: Local/Desktop
- Difficulty: Intermediate (3/5)
- License: Apache-2.0
- Minimum VRAM: 8 GB
- Added: Apr 3, 2026
Similar Tools
- All-in-one framework for fine-tuning 100+ LLMs with web UI. (Open Source · Self Hosted · Offline · GPU 8GB+ · Easy)
- Low-code framework for building custom AI models by Predibase. (Open Source · Self Hosted · Offline · GPU 8GB+ · Easy)
- No-code tool by Hugging Face for training ML models automatically. (Open Source · Self Hosted · Offline · GPU 8GB+ · Beginner)