Petals
Run large language models collaboratively by distributing layers across users.
Open Source · Self Hosted · GPU Required (4GB+ VRAM)
About
Petals runs large language models collaboratively by distributing a model's layers across multiple users over the internet. You can run 70B+ models by contributing your own GPU and drawing on others' shared resources — like BitTorrent, but for LLM inference. MIT license.
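The core idea — a model's layers partitioned across peers, with activations streamed through them in order — can be illustrated with a toy sketch. This is a conceptual illustration only, not the Petals API; the class and function names here are hypothetical.

```python
def make_layer(weight):
    # Stand-in for a transformer block: scale the activation by a weight.
    return lambda x: x * weight

class Peer:
    """Hosts a contiguous slice of the model's layers (one 'server' in the swarm)."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        # Run the activation through this peer's slice of the model.
        for layer in self.layers:
            x = layer(x)
        return x

def distribute(layers, num_peers):
    # Assign each peer an equal contiguous slice of the layer stack.
    chunk = len(layers) // num_peers
    return [Peer(layers[i * chunk:(i + 1) * chunk]) for i in range(num_peers)]

def run_inference(peers, x):
    # The client pipelines activations peer to peer, in layer order.
    for peer in peers:
        x = peer.forward(x)
    return x

layers = [make_layer(w) for w in (2, 3, 5, 7)]   # a tiny 4-layer "model"
peers = distribute(layers, num_peers=2)          # 2 peers, 2 layers each
print(run_inference(peers, 1))                   # 1*2*3*5*7 = 210
```

In the real system each peer is a remote machine holding a few transformer blocks on its GPU, and the client only ever handles activations, never the full 70B+ weights — which is why a 4 GB card suffices to participate.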
Details
- Category: LLM Inference & Serving
- Price: Free
- Platform: Local/Desktop
- Difficulty: Intermediate (3/5)
- License: MIT
- Minimum VRAM: 4 GB
- Added: Apr 3, 2026
Similar Tools
- Desktop application for discovering, downloading, and running local LLMs. (Self Hosted · Offline · Beginner)
- Open-source ChatGPT alternative that runs 100% offline on your computer. (Open Source · Self Hosted · Offline · Beginner)
- Open-source ecosystem for running LLMs locally on consumer hardware. (Open Source · Self Hosted · Offline · Beginner)