AWQ (Activation-aware Weight Quantization)
Efficient LLM quantization preserving important weight channels.
Open Source · Self Hosted · Offline Capable · GPU Required (8GB+ VRAM)
About
AWQ (Activation-aware Weight Quantization), from MIT HAN Lab, quantizes LLM weights while protecting salient weight channels identified by activation magnitudes: input channels that see large activations are scaled before quantization so they lose less precision. This yields better quality than naive round-to-nearest quantization at the same bit-width. MIT license.
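To make the idea concrete, here is a minimal, illustrative sketch of activation-aware scaling in PyTorch. It is not the MIT HAN Lab implementation: the function name, the fixed alpha=0.5, and the per-row symmetric round-to-nearest quantizer are assumptions made for this example; the real repo searches the scaling exponent on calibration data and quantizes in small groups.

import torch

def awq_scale_and_quantize(W, X, w_bit=4, alpha=0.5):
    # W: [out_features, in_features] weight matrix.
    # X: [n_tokens, in_features] calibration activations feeding this layer.
    act_magnitude = X.abs().mean(dim=0)              # per-input-channel salience
    s = act_magnitude.clamp(min=1e-5) ** alpha       # scale salient channels harder
    s = s / (s.max() * s.min()).sqrt()               # center the scales around 1

    W_scaled = W * s                                 # salient columns grow before rounding

    # Per-output-row symmetric round-to-nearest quantization (simplified).
    qmax = 2 ** (w_bit - 1) - 1                      # e.g. 7 for 4-bit
    step = (W_scaled.abs().amax(dim=1, keepdim=True) / qmax).clamp(min=1e-8)
    W_q = (W_scaled / step).round().clamp(-qmax - 1, qmax) * step

    # Fold the inverse scale back: x @ (W_q / s).T approximates x @ W.T.
    return W_q / s

Scaling a salient input channel up by s before rounding, then folding 1/s back into the weight, shrinks that channel's quantization error by roughly a factor of s, which is why AWQ preserves quality where naive round-to-nearest loses it.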
Details
- Category: Model Training & Fine-Tuning
- Price: Free
- Platform: Local/Desktop
- Difficulty: Intermediate (3/5)
- License: MIT
- Minimum VRAM: 8 GB
- Added: Apr 3, 2026
Similar Tools
TRL — library for training LLMs with reinforcement learning (RLHF, DPO, PPO).
Open Source · Self Hosted · Offline · GPU 8GB+ · Intermediate
LLaMA-Factory — all-in-one framework for fine-tuning 100+ LLMs with a web UI.
Open Source · Self Hosted · Offline · GPU 8GB+ · Easy
Ludwig — low-code framework for building custom AI models, by Predibase.
Open Source · Self Hosted · Offline · GPU 8GB+ · Easy