Featured Tool

LoRA (Low-Rank Adaptation)

Parameter-efficient fine-tuning technique that adapts large models with minimal trainable parameters.

Open Source · Self Hosted · Offline Capable · GPU Required (4GB+ VRAM)

About

LoRA (Low-Rank Adaptation), from Microsoft Research, fine-tunes large language models by freezing the pretrained weights and injecting small trainable rank-decomposition matrices alongside them. This can cut the number of trainable parameters by up to 10,000x (reported for GPT-3 175B) while matching full fine-tuning quality. MIT license.
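The idea above can be sketched in a few lines. This is an illustrative NumPy toy, not Microsoft's `loralib` API: the frozen weight `W` is left untouched, and only the two low-rank matrices `A` and `B` (scaled by `alpha / r`) would be trained. `B` starts at zero, so at initialization the adapted layer computes exactly what the frozen layer does.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA-adapted linear layer: y = W x + (alpha/r) * B A x."""

    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.standard_normal((out_features, in_features))
        # Low-rank adapters: A is small-Gaussian, B is zero, so the
        # update B @ A is zero at init and the model starts unchanged.
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Only A and B would receive gradients during training.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

    def trainable_params(self):
        return self.A.size + self.B.size

layer = LoRALinear(in_features=1024, out_features=1024, r=4)
print(layer.W.size)                                  # 1,048,576 frozen
print(layer.trainable_params())                      # 8,192 trainable
print(layer.W.size // layer.trainable_params())      # 128x fewer at r=4
```

At rank 4 on a single 1024x1024 layer the reduction is 128x; the headline 10,000x figure comes from applying low ranks across the attention weights of a 175B-parameter model.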


Details

Price
Free
Platform
Local/Desktop
Difficulty
Intermediate (3/5)
License
MIT
Minimum VRAM
4 GB
Added
Apr 3, 2026

Similar Tools

Featured

Library for training LLMs with reinforcement learning (RLHF, DPO, PPO).

Open Source · Self Hosted · Offline · GPU 8GB+
Intermediate
Featured

All-in-one framework for fine-tuning 100+ LLMs with web UI.

Open Source · Self Hosted · Offline · GPU 8GB+
Easy

Low-code framework for building custom AI models by Predibase.

Open Source · Self Hosted · Offline · GPU 8GB+
Easy