Featured Tool
CLIP
Contrastive language-image pre-training model by OpenAI for zero-shot visual classification.
Open Source · Self Hosted · Offline Capable · GPU Required (4GB+ VRAM)
About
CLIP (Contrastive Language-Image Pre-Training) by OpenAI learns visual concepts from natural-language supervision. It enables zero-shot image classification, image-text similarity scoring, and visual search without task-specific training, and it serves as the foundation for many downstream tools. MIT licensed.
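Zero-shot classification with CLIP works by embedding the image and a set of text prompts (e.g. "a photo of a cat") into a shared space, then taking a softmax over the temperature-scaled cosine similarities. A minimal sketch of that scoring step, using toy hand-written vectors in place of the real CLIP encoders (in practice the embeddings would come from a library such as `open_clip` or Hugging Face `transformers`):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(image_emb, text_embs, labels, temperature=100.0):
    # CLIP scales similarities by a learned temperature (around 100)
    # before the softmax
    logits = [temperature * cosine(image_emb, t) for t in text_embs]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return dict(zip(labels, (e / total for e in exps)))

# Toy 3-d embeddings; real CLIP vectors are 512- or 768-dimensional
image = [0.9, 0.1, 0.0]
texts = [[1.0, 0.0, 0.0],   # prompt: "a photo of a cat"
         [0.0, 1.0, 0.0]]   # prompt: "a photo of a dog"
probs = zero_shot_classify(image, texts, ["cat", "dog"])
print(probs)  # probabilities heavily favor "cat"
```

Because the label set is just a list of prompts, swapping in new classes requires no retraining, only new text embeddings.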
Details
- Price: Free
- Platform: Local/Desktop
- Difficulty: Intermediate (3/5)
- License: MIT
- Minimum VRAM: 4 GB
- Added: Apr 3, 2026
Similar Tools
Featured
State-of-the-art real-time object detection supporting YOLOv5 through v11.
Open Source · Self Hosted · Offline
Easy
Open-vocabulary real-time object detection using YOLO with text prompts.
Open Source · Self Hosted · Offline · GPU 4GB+
Intermediate
Featured
Open-set object detection combining DINO with grounded pre-training.
Open Source · Self Hosted · Offline · GPU 4GB+
Intermediate