Aira

Every AI decision,
governed and proven

Aira is the governance layer for AI agents. Multi-model consensus, policy enforcement, human approval, and cryptographic proof — in one API call.

Try it live

Pick a use case and watch the full pipeline — data in, models evaluate, policies check, humans approve, receipt minted. These are examples. The infrastructure works with any agent.

The problem

Which model decided?

You don't know. There's no record of which model ran, what version, or what it saw.

Was the decision correct?

No second opinion. A single model hallucinating means a wrong decision ships.

Did a human review it?

High-stakes decisions go straight through. No approval gate, no audit.

Can you prove it later?

No cryptographic proof. If a regulator asks, you have logs — maybe. Not evidence.

What Aira does

Multi-model consensus

Fan out to 2–5 models. Score agreement. Flag disagreement. No single point of failure.
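Agreement scoring of this kind can be sketched as a toy majority-vote tally. This is an illustrative sketch, not Aira's actual algorithm:

```python
from collections import Counter

def score_consensus(decisions: list[str]) -> tuple[str, float, bool]:
    """Toy majority vote: return (top decision, agreement score, flagged?)."""
    counts = Counter(decisions)
    top, votes = counts.most_common(1)[0]
    agreement = votes / len(decisions)
    flagged = agreement < 1.0  # any dissent gets surfaced
    return top, agreement, flagged

# Three models, one dissenter: agreement is 2/3 and the split is flagged.
print(score_consensus(["approve", "approve", "deny"]))
```

Scoring is the easy part; the value is that a flagged split can feed directly into the policy engine as an escalation condition.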

Policy engine

Three modes — deterministic rules, AI evaluation, or consensus. Auto-enforce on every decision.
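A deterministic rule in this spirit can be modeled as a predicate over the case plus an action to trigger when it matches. The rule shape below is hypothetical, not Aira's schema:

```python
# Hypothetical rule shape for the deterministic mode: each rule pairs a
# predicate over the case with an action to trigger when it matches.
RULES = [
    {"name": "large-loan",
     "when": lambda c: c.get("amount", 0) > 50_000,
     "action": "escalate"},
    {"name": "low-confidence",
     "when": lambda c: c.get("confidence", 1.0) < 0.8,
     "action": "escalate"},
]

def check_policies(case: dict) -> list[str]:
    """Return the actions triggered by every matching rule."""
    return [r["action"] for r in RULES if r["when"](case)]

print(check_policies({"amount": 120_000, "confidence": 0.95}))  # ['escalate']
```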

Human-in-the-loop

When policies trigger, approvers review via email or dashboard. Approve or deny with full context.

Cryptographic receipts

Ed25519 signatures + RFC 3161 timestamps. Tamper-proof, independently verifiable evidence.

Ask Aira

Natural language interface for your governance data. Query cases, policies, agents — conversationally.

Full audit trail

Every data point, model output, policy evaluation, approval, and receipt — linked and logged.

Seven steps, one API call

Your agent calls aira.evaluate() once. Everything else happens automatically.

1. Data in: Your agent sends case details — any structure, any domain.
2. AI evaluation: Primary model analyzes the case. Returns decision, confidence, reasoning.
3. Policy check: Rules engine evaluates conditions. Triggers actions: approve, deny, escalate.
4. Consensus: Multiple models evaluate independently. Agreement is scored. Disagreement is flagged.
5. Human approval: Approvers review when triggered. Email links or dashboard. Approve or deny.
6. Receipt: Decision signed with Ed25519. Timestamped via RFC 3161. Immutable.
7. Audit trail: Full lineage recorded — data, models, policies, approvals, proof.
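The seven steps can be mocked end to end in a toy pipeline. Everything here is a stand-in (stub model functions, a naive vote, a SHA-256 hash in place of a real Ed25519 signature and RFC 3161 timestamp), purely to show the control flow:

```python
import hashlib
import json

def toy_pipeline(details: dict, models: list) -> dict:
    # 1. Data in: the case record starts with the raw details.
    record = {"details": details}
    # 2 and 4. Each model evaluates; consensus is a naive majority vote.
    votes = [m(details) for m in models]
    decision = max(set(votes), key=votes.count)
    agreement = votes.count(decision) / len(votes)
    # 3 and 5. Policy check: a split vote escalates to a human reviewer.
    needs_human = agreement < 1.0
    # 6. Receipt: a content hash stands in for signature + timestamp.
    receipt = hashlib.sha256(
        json.dumps({"details": details, "votes": votes},
                   sort_keys=True).encode()).hexdigest()
    # 7. Audit trail: everything linked in one record.
    record.update(votes=votes, decision=decision, agreement=agreement,
                  escalated=needs_human, receipt=receipt)
    return record

models = [lambda d: "approve", lambda d: "approve", lambda d: "deny"]
result = toy_pipeline({"amount": 12_000}, models)
print(result["decision"], result["escalated"])  # approve True
```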

Architecture

Aira sits between your agents and your AI providers. It doesn't replace your models — it governs them.

┌─────────────────────────────────────────────────────────┐
│  Your Agent                                             │
│  (lending, claims, KYC, compliance, content mod, ...)   │
└───────────────────────┬─────────────────────────────────┘
                        │  aira.evaluate(details, models)
                        ▼
┌─────────────────────────────────────────────────────────┐
│  Aira Governance Engine                                 │
│                                                         │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐              │
│  │ Claude   │  │ GPT      │  │ Gemini   │  ... N models │
│  │ Sonnet 4 │  │ 5.2      │  │ 3.1 Pro  │              │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘              │
│       └──────────────┼─────────────┘                    │
│                      ▼                                  │
│              ┌───────────────┐                          │
│              │   Consensus   │  score agreement         │
│              └───────┬───────┘                          │
│                      ▼                                  │
│              ┌───────────────┐                          │
│              │ Policy Engine │  rules / AI / consensus  │
│              └───────┬───────┘                          │
│                      ▼                                  │
│              ┌───────────────┐                          │
│              │Human Approval │  email + dashboard       │
│              └───────┬───────┘                          │
│                      ▼                                  │
│              ┌───────────────┐                          │
│              │   Receipt     │  Ed25519 + RFC 3161      │
│              └───────────────┘                          │
└─────────────────────────────────────────────────────────┘
                        │
                        ▼
            Decision + Proof returned to agent

One integration, any agent

Python SDK, TypeScript SDK, or raw HTTP. Initialize the client, call evaluate, read the result.

Python

from aira import Aira

aira = Aira(api_key="...")

# One call runs evaluation, consensus, policy checks, and receipt minting.
result = aira.evaluate(
    details="Loan application...",
    models=["claude-sonnet-4",
            "gpt-5.2",
            "gemini-3.1-pro"],
)

print(result.consensus.decision)
print(result.receipt.signature)

TypeScript

import { Aira } from "@airaproof/sdk"

const aira = new Aira({ apiKey: "..." })

// One call runs evaluation, consensus, policy checks, and receipt minting.
const result = await aira.evaluate({
  details: "Loan application...",
  models: ["claude-sonnet-4",
           "gpt-5.2",
           "gemini-3.1-pro"],
})

console.log(result.consensus.decision)
console.log(result.receipt.signature)

Built for compliance

EU AI Act — Article 14

Human oversight for high-risk AI. Aira enforces it with policy-driven approval gates and full audit trails.

Tamper-proof evidence

Cryptographic receipts are independently verifiable. Ed25519 signatures can't be forged. RFC 3161 timestamps can't be backdated.

Model accountability

Every receipt records which models ran, which versions, what they decided, and how they disagreed. Full attribution.

Decision provenance

The complete chain from input data to final decision is hash-linked. If anything was altered, the chain breaks.
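The hash-linking idea (each record commits to the hash of the one before it) can be illustrated with stdlib hashing. An illustrative sketch, not Aira's actual chain format:

```python
import hashlib
import json

def chain(records: list[dict]) -> list[dict]:
    """Link records: each entry stores the SHA-256 of the previous entry."""
    prev, out = "0" * 64, []
    for r in records:
        entry = {"prev": prev, "data": r}
        prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        out.append(entry)
    return out

def verify(entries: list[dict]) -> bool:
    """Recompute every link; any altered record breaks the chain."""
    prev = "0" * 64
    for e in entries:
        if e["prev"] != prev:
            return False
        prev = hashlib.sha256(
            json.dumps(e, sort_keys=True).encode()).hexdigest()
    return True

trail = chain([{"step": "data_in"}, {"step": "ai_eval"}, {"step": "receipt"}])
print(verify(trail))                      # True
trail[1]["data"]["step"] = "tampered"
print(verify(trail))                      # False: the chain is broken
```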

Who it's for

AI teams

Ship agents faster. Governance is one SDK call, not months of custom infrastructure.

Compliance teams

Audit any AI decision after the fact. Cryptographic proof, not screenshots.

Regulated industries

Finance, insurance, healthcare, legal — anywhere AI decisions carry real consequences.

Aira

AI governance infrastructure

Softure · Berlin