Guardrails AI

Framework for adding validation and safety guardrails to LLM outputs.

Open Source · Self Hosted

About

Guardrails AI provides validators for LLM outputs, including format checking, PII detection, toxicity filtering, and factual consistency. Validators compose into guard pipelines and work with any LLM. Released under the Apache 2.0 license.
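To illustrate the composable-pipeline idea described above, here is a minimal sketch in plain Python. The `Guard` class, validator names, and chaining style below are hypothetical stand-ins written for this listing, not the Guardrails AI API itself; they only show how independent output validators can be stacked into one pipeline.

```python
import re
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical sketch of a composable guard pipeline; not the Guardrails AI API.

@dataclass
class Result:
    valid: bool
    errors: List[str]

class Guard:
    """Runs a chain of validators against an LLM output string."""

    def __init__(self) -> None:
        self._validators: List[Callable[[str], Optional[str]]] = []

    def use(self, validator: Callable[[str], Optional[str]]) -> "Guard":
        # Returning self lets validators be chained fluently.
        self._validators.append(validator)
        return self

    def validate(self, text: str) -> Result:
        # Collect every error message; None means the validator passed.
        errors = [e for v in self._validators if (e := v(text)) is not None]
        return Result(valid=not errors, errors=errors)

def no_email_pii(text: str) -> Optional[str]:
    # Toy PII check: flag anything that looks like an email address.
    return "contains email PII" if re.search(r"\b\S+@\S+\.\S+\b", text) else None

def max_length(limit: int) -> Callable[[str], Optional[str]]:
    # Validator factory: enforce a length-based format constraint.
    def check(text: str) -> Optional[str]:
        return f"exceeds {limit} chars" if len(text) > limit else None
    return check

guard = Guard().use(no_email_pii).use(max_length(200))
print(guard.validate("Contact me at bob@example.com").errors)  # → ['contains email PII']
```

The same pattern generalizes: a real deployment would swap the toy regex and length checks for model-backed toxicity or factual-consistency validators while keeping the pipeline shape unchanged.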

Reviews (0)

No reviews yet.

Details

Price: Free
Platform: Local/Desktop
Difficulty: Easy (2/5)
License: Apache-2.0
Added: Apr 3, 2026

Similar Tools

LLM integration SDK by Microsoft for C#, Python, and Java applications.

Open Source · Self Hosted
Intermediate

Framework by Stanford for programming with foundation models using optimized prompts.

Open Source · Self Hosted
Advanced

Open-source platform for building LLM applications with visual workflow editor.

Open Source · Self Hosted
Easy