Platform Documentation

Everything you need to get the most out of TruthVouch — from first setup through advanced configuration and API integration.

Product Documentation

Deep-dive documentation for every TruthVouch product.

Hallucination Shield

Monitor AI outputs for hallucinations in real time. Configure cross-checks, set truth nuggets, and receive alerts when AI contradicts your verified facts.

View docs ↗

AI Advisor

AI strategy and compliance assessment. Get a personalised AI maturity score, gap analysis, and prioritised action plan for your organisation.

View docs ↗

Compliance AI

Automate EU AI Act, SOC 2, and ISO 42001 compliance. Map obligations, run assessments, and generate audit-ready evidence packages.

View docs ↗

Brand Intelligence

Monitor how LLMs represent your brand. Detect inaccurate brand mentions across AI outputs and protect your reputation in the AI era.

View docs ↗

Content Certification

Certify AI-generated content as factual. Issue tamper-proof certificates with trust scores so readers can verify AI content authenticity.

View docs ↗

Trust API

Embed trust verification into any application. REST and gRPC endpoints for real-time hallucination detection, content certification, and truth scoring.

API Reference ↗
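As a rough illustration of REST integration, the sketch below builds (but does not send) a verification request. The base URL, endpoint path, and field names here are assumptions for illustration only — consult the API Reference for the real contract.

```python
import json
import urllib.request

# Hypothetical values -- the real base URL, path, and payload schema
# are defined in the Trust API Reference.
API_BASE = "https://api.truthvouch.example/v1"

def build_verify_request(content: str, api_key: str) -> urllib.request.Request:
    """Build a POST request asking the (hypothetical) /verify endpoint
    to evaluate a piece of AI-generated content."""
    body = json.dumps({"content": content}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/verify",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (or any HTTP client) and reading the JSON response is then a standard call; gRPC clients follow the same request/response shape over protocol buffers.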

AI Governance

Policy engine and enforcement for AI. Define allowed models, set usage policies, enforce guardrails, and get audit trails for every AI decision.

View docs ↗

Key Concepts

Understanding these concepts will help you get the most from TruthVouch.

Truth Nuggets

Verified facts stored in your knowledge base that TruthVouch uses as the ground truth when evaluating AI outputs for hallucinations.

Hallucination Detection

The process of comparing AI-generated content against your truth nuggets to identify factual errors, contradictions, or unsupported claims. Uses AI-powered verification across 9+ models.

Trust Score

A 0–100 score that reflects the factual accuracy and reliability of an AI model or piece of content, updated in real time as new outputs are evaluated.
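One simple way to model a rolling 0–100 score that updates as new outputs arrive is an exponentially weighted average. This is an illustrative model only, not TruthVouch's published scoring formula; the weight parameter is an assumption.

```python
def update_trust_score(current: float, evaluation: float, weight: float = 0.1) -> float:
    """Blend a new per-output evaluation (0-100) into the running score.

    `weight` controls how fast the score reacts to new evidence.
    Illustrative sketch only -- not TruthVouch's actual formula.
    """
    if not 0.0 <= evaluation <= 100.0:
        raise ValueError("evaluation must be in [0, 100]")
    return (1.0 - weight) * current + weight * evaluation
```

With a weight of 0.1, a model scored 80 that produces a perfectly accurate output moves to 82; repeated good outputs drift the score upward gradually rather than in jumps.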

Content Certification

A tamper-proof cryptographic certificate that binds a piece of AI-generated content to a trust score and verification timestamp.

Neural Cache

A two-layer cache (Redis L1 + pgvector L2) that stores evaluation results semantically, reducing redundant LLM calls and cutting latency.

Compliance Mapping

The automated process of mapping your AI systems against regulatory frameworks like EU AI Act, SOC 2, and ISO 42001 to identify gaps and obligations.

Brand AVS

Answer Verification System — the mechanism that checks whether AI search engine responses about your brand match your official brand truth knowledge base. Covers ChatGPT Search, Perplexity, Gemini Web, Google AI Overview, Copilot, Grok, and Google AI Mode.

GEO Optimisation

Generative Engine Optimisation — improving how AI models represent your brand and content in AI-generated responses, analogous to SEO for search engines.

Cross-Check Jobs

Scheduled or on-demand evaluations that run your truth nuggets against specific AI models to proactively surface hallucinations before they reach users. Up to 1,000 checks per day are supported.
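A client scheduling its own jobs may want to stay under the 1,000-checks-per-day limit. The limit comes from this page; the guard itself is an illustrative client-side sketch (the service enforces the real quota).

```python
from datetime import date

class CrossCheckQuota:
    """Client-side guard for the documented 1,000 checks/day limit.

    Illustrative sketch -- the TruthVouch service enforces the
    authoritative quota server-side.
    """
    DAILY_LIMIT = 1000

    def __init__(self) -> None:
        self._day = date.min
        self._used = 0

    def try_acquire(self, today: date) -> bool:
        """Reserve one check; returns False once today's quota is spent."""
        if today != self._day:            # new day: reset the counter
            self._day, self._used = today, 0
        if self._used >= self.DAILY_LIMIT:
            return False                  # over quota: defer until tomorrow
        self._used += 1
        return True
```

A scheduler would call `try_acquire` before dispatching each cross-check job and requeue anything that gets a `False`.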

Can't find what you need?

Book a walkthrough with our team — we'll answer your questions and show you the exact features you're looking for.

Book a Walkthrough

Not sure where to start? Take our free AI Maturity Assessment

Get your personalised report in 5 minutes — no credit card required