March 10, 2026 · 5 min read

How Banks Can Satisfy SR 11-7 with One API Call

What SR 11-7 Requires

The Federal Reserve's SR 11-7 (Guidance on Model Risk Management) is the gold standard for model governance in banking. It requires three things from any model used in consequential decisions:

  • Model validation — independent review of model performance, limitations, and assumptions.
  • Ongoing monitoring — continuous tracking of model behavior against expected outcomes.
  • Documentation — comprehensive records of model development, validation, and use, accessible to auditors.

When banks started using AI models for credit decisions, fraud detection, and risk assessment, SR 11-7 didn't go away — it applied to those models too.

The Current State: Manual, Expensive, Slow

Most banks satisfy SR 11-7 through manual model validation exercises that take 3–6 months per model. A team of quantitative analysts builds challenger models, runs backtests, writes hundred-page validation reports, and presents findings to a model risk committee.

For traditional statistical models updated annually, this cadence was manageable. For AI models that update weekly or monthly, it's impossible. Banks are stuck choosing between compliance (slow, manual validation) and capability (fast, AI-driven decisions).

Aira Automates the Core Requirements

Aira doesn't replace your model risk management team. It automates the validation and documentation layer that SR 11-7 demands for every AI decision:

  • Independent validation — every decision is evaluated by multiple independent AI models from different providers, satisfying the "effective challenge" requirement.
  • Continuous monitoring with drift detection — consensus scores and disagreement rates are tracked over time. Aira computes KL divergence against established baselines and fires alerts when model behavior drifts — exactly the ongoing monitoring SR 11-7 requires.
  • Audit-ready documentation — every evaluation produces a cryptographically signed receipt with individual model responses, confidence scores, consensus calculation, and timestamps. Compliance bundles aggregate these receipts into Merkle-rooted evidence packages ready for EU AI Act Article 12 and ISO 42001 examinations.
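To make the drift-detection bullet concrete, here is a minimal sketch of a KL-divergence check against a baseline. The bucketed consensus-score histograms, the bucket counts, and the alert threshold are all illustrative assumptions, not Aira's published internals:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) for two discrete distributions given as aligned lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

# Baseline: historical consensus-score histogram (5 buckets spanning 0.0-1.0)
baseline = normalize([2, 5, 18, 40, 35])
# Current window: mass has shifted toward lower-consensus buckets
current = normalize([8, 14, 30, 28, 20])

drift = kl_divergence(current, baseline)
ALERT_THRESHOLD = 0.1  # illustrative; real thresholds are tuned per model
if drift > ALERT_THRESHOLD:
    print(f"drift alert: KL={drift:.3f} exceeds {ALERT_THRESHOLD}")
```

The point of comparing distributions rather than single scores is that slow, systematic drift shows up long before any individual decision looks anomalous.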

Example: Loan Approval Workflow

A bank uses an AI model to pre-screen loan applications. Here's how Aira integrates:

from aira import Aira

aira = Aira(api_key="aira_live_xxx")

# Step 1: Authorize — policy engine + consensus evaluate the action
auth = aira.authorize(
    action_type="loan_decision",
    details="Pre-screen recommends approval. Confidence: 0.87. Amount: €180K.",
    agent_id="lending-agent",
    model_id="claude-sonnet-4-6",
)

# Consensus result (3 models):
#   Claude Sonnet: APPROVE (0.91)
#   GPT-5.2:       APPROVE (0.88)
#   Gemma 4 31B:   REVIEW  (0.62)
#
# Disagreement score exceeds threshold → auth.status == "pending_approval"
# Loan held for human underwriter review.

# Step 2: After human approval and loan disbursement, notarize
receipt = aira.notarize(
    action_uuid=auth.action_uuid,
    outcome="completed",
    outcome_details="Loan approved by underwriter J. Smith, disbursed €180K.",
)

The entire authorize-and-notarize flow completes in under 3 seconds, and the resulting receipt serves as audit-ready documentation for SR 11-7 examinations.
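The consensus comments above show a disagreement score triggering human review. As a rough illustration of how such a score could be derived from per-model verdicts, here is one plausible formulation; the function, the dissent-plus-spread combination, and the threshold are assumptions for exposition, not Aira's actual formula:

```python
from collections import Counter

def disagreement_score(verdicts):
    """Combine dissent from the majority verdict with the spread
    of confidence scores. Illustrative formula only."""
    labels = [label for label, _ in verdicts]
    confidences = [conf for _, conf in verdicts]
    _, majority_count = Counter(labels).most_common(1)[0]
    dissent = 1 - majority_count / len(labels)       # fraction dissenting
    spread = max(confidences) - min(confidences)     # confidence spread
    return dissent + spread

# The three verdicts from the example above
verdicts = [("APPROVE", 0.91), ("APPROVE", 0.88), ("REVIEW", 0.62)]
score = disagreement_score(verdicts)
THRESHOLD = 0.5  # illustrative
needs_human = score > THRESHOLD  # True → hold for underwriter review
```

Routing on a scalar like this keeps the escalation rule simple and auditable: the threshold itself becomes a documented model-risk control.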

What Examiners See

When OCC or Fed examiners review your AI lending program, they want to see evidence that models are independently validated and that there is a documented process for handling model disagreements. Aira provides both — automatically, for every decision, with cryptographic proof of integrity.

Instead of a 200-page validation report produced once a year, examiners see continuous, per-decision validation records that demonstrate ongoing model risk management. Every receipt is independently verifiable at /verify/action/{id} — no authentication, no vendor API call required.
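The independent verifiability rests on the Merkle-rooted bundles mentioned earlier. As a sketch of the underlying idea, here is a minimal Merkle-root computation over receipt payloads, assuming SHA-256 with pairwise hashing and duplication of an odd trailing node; the receipt contents and hashing layout are illustrative, not Aira's exact scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then pairwise-hash levels up to a single root.
    An odd-length level duplicates its last node."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical receipt payloads in one compliance bundle
receipts = [
    b'{"action":"loan_decision","outcome":"completed"}',
    b'{"action":"fraud_check","outcome":"completed"}',
    b'{"action":"kyc_review","outcome":"pending"}',
]
root = merkle_root(receipts)
# Tampering with any single receipt changes the root, so an examiner
# can check bundle integrity without trusting the vendor's database.
```

Publishing only the root commits to every receipt in the bundle at once, which is why per-decision records can stay private while remaining provably untampered.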

Get Started

If your bank uses AI for any consequential decision, SR 11-7 compliance is non-negotiable. Aira lets you satisfy the core requirements programmatically, without slowing down your AI capabilities.

Talk to our team to see how Aira fits into your model risk management framework.