Live Demo (no installation needed)
🎮 https://drvl-demo.onrender.com/
(First load may take 10–30 seconds due to Render free-tier spin-up. Subsequent visits are instant.)
This folder contains a minimal, self-contained demonstration of the Distributed Runtime Verification Layer (DRVL) governing an AI agent.
The agent attempts database operations (READ, UPDATE, DELETE, DROP) while DRVL enforces deterministic runtime policies — executing allowed actions, blocking forbidden ones, escalating risky ones, or auto-deciding based on simple rules.
Real LLM Mode (bring your own key)
Toggle the switch on → paste your OpenAI API key. The demo currently supports OpenAI (GPT-4o).
Actions become truly non-deterministic (and occasionally risky — perfect to see DRVL govern real frontier-model behavior)
Warning: Using the real LLM mode will consume your OpenAI tokens and may incur costs. The default simulation mode uses no tokens.
Your key is sent once to the server, never stored or logged — only used for your current session.
Policy Integrity & Attestation
Every governance decision includes:
Envelope hash — SHA-256 fingerprint of the proposed execution envelope (captures the exact action request before enforcement).
Policy hash — SHA-256 fingerprint of the active policy set (ensures the decision can be reproduced under the same rules).
Signature — HMAC-SHA256 signature over the event payload, providing cryptographic attestation of the enforcement decision.
The Execution Envelope wraps the action proposal early, creating a clear boundary between probabilistic model reasoning and deterministic system execution.
Only validated envelopes proceed to execution, producing verifiable enforcement events that record what was proposed, what policy evaluated it, and what decision was made.
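The three integrity fields above can be sketched with Python's standard library alone. This is a minimal illustration, not the demo's actual `drvl.py` code; the `SECRET_KEY` value and the canonical-JSON encoding are assumptions made for the example.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-secret"  # hypothetical signing key; a real deployment would manage this securely


def _canonical(obj: dict) -> bytes:
    """Deterministic encoding so the same payload always hashes the same."""
    return json.dumps(obj, sort_keys=True).encode()


def envelope_hash(envelope: dict) -> str:
    """SHA-256 fingerprint of the proposed execution envelope."""
    return hashlib.sha256(_canonical(envelope)).hexdigest()


def policy_hash(policies: dict) -> str:
    """SHA-256 fingerprint of the active policy set."""
    return hashlib.sha256(_canonical(policies)).hexdigest()


def sign_event(event_payload: dict) -> str:
    """HMAC-SHA256 attestation over the enforcement event."""
    return hmac.new(SECRET_KEY, _canonical(event_payload), hashlib.sha256).hexdigest()


def verify_event(event_payload: dict, signature: str) -> bool:
    """Constant-time check; a mismatch is what the UI flags as tampered."""
    return hmac.compare_digest(sign_event(event_payload), signature)
```

Verification simply recomputes the HMAC and compares: a corrupted signature or altered payload fails the check, which is exactly the condition the dashboard surfaces as "✗ Tampered / Invalid".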
Demo note (intentional mismatches):
For illustration purposes, ~15% of events are deliberately tampered with (e.g., corrupted signature or mismatched policy hash) to demonstrate integrity detection in the interface (red “✗ Tampered / Invalid” indicator).
In a real deployment, policy hashes would remain consistent across events evaluated under the same policy state. The verification checks exist to detect tampering, misconfiguration, or unauthorized modification.
Execution Envelope (Action Boundary)
Every proposed action is wrapped in a lightweight Execution Envelope before verification:
Captures the exact action + parameters at proposal time
Includes timestamp + nonce for freshness/replay protection
Computes a deterministic hash of the proposal
The envelope creates a clear separation between reasoning (probabilistic LLM/agent) and execution (deterministic enforcement).
Only verified, authorized envelopes proceed to tool/database execution — producing a traceable, hashable boundary object.
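A boundary object with these properties can be sketched as a small dataclass. Field names here are illustrative, not the demo's exact schema:

```python
import hashlib
import json
import secrets
import time
from dataclasses import asdict, dataclass, field


@dataclass
class ExecutionEnvelope:
    """Wraps an action proposal before it reaches the policy engine."""
    action: str      # e.g. "DELETE"
    params: dict     # exact parameters captured at proposal time
    timestamp: float = field(default_factory=time.time)               # freshness
    nonce: str = field(default_factory=lambda: secrets.token_hex(8))  # replay protection

    def digest(self) -> str:
        """Deterministic SHA-256 hash of the proposal."""
        canonical = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()
```

Because the timestamp and nonce are fixed at construction, the digest is stable for a given envelope but differs between two otherwise identical proposals, so replayed proposals are distinguishable.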
In the UI you’ll see:
Envelope Hash in the “Latest Governance Decision” panel
Envelope hash in live event lines (for integrity verification)
This pattern mirrors how real secure systems (capability-based OSes, zero-trust gateways, confidential compute runtimes) separate untrusted proposal from trusted execution.
Key Features
Agent Mode Toggle
Switch between:
Simulated / random probabilistic agent (fast, free, predictable)
Real OpenAI LLM (your API key) — unpredictable, realistic frontier-model behavior
Deterministic Runtime Enforcement
Policies applied consistently:
Allowed → execute
Forbidden → block
Escalatable → decide (auto or manual)
Escalation Handling (for DELETE)
When escalation is required:
~35% auto-approved → executed immediately (green)
~35% auto-denied → blocked immediately (red)
~30% pending → wait for manual Approve / Deny via dashboard buttons
Real-Time Governance Dashboard
Manual or autonomous action triggers
Adjustable speed slider for autonomous mode
Execution / block / approved counters
Active policies view
Latest decision panel with detailed explanation
Live event stream with timestamps and color coding
Escalation queue with pending requests + Approve/Deny buttons
LLM mode status + error feedback
Policy hash + signature displayed on every event
Architecture
AI Agent (Simulated or Real LLM)
↓
DRVL Policy Engine ← deterministic rules + auto-decision
↓
Database Execution
↓
Event Bus
↓
Governance Dashboard (browser)
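The Event Bus layer in the diagram can be as small as a synchronous pub/sub class. This is a sketch in the spirit of `event_bus.py`; the demo's actual implementation may differ:

```python
from collections import defaultdict
from typing import Callable


class EventBus:
    """Minimal synchronous publish/subscribe dispatcher."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)
```

In this architecture the policy engine publishes every enforcement event, and both the dashboard feed and the audit log subscribe to the same stream.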
Running the Demo Locally
1. Install dependencies
pip install flask openai   # openai required only for real LLM mode
2. Start the server
python app.py
3. Open in browser
http://localhost:10000
Example Governance Rules
Demo note: Escalation decisions (DELETE) are probabilistic for realism — auto-approved (~35%), auto-denied (~35%), or pending (~30%) for human-in-the-loop control.
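As a concrete illustration, the four actions described in this demo could map onto the three policy outcomes like this. The assignment of READ and UPDATE to "allow" is an assumption for the example; only DELETE (escalate) and the allow/forbid/escalate outcomes themselves are stated above:

```python
# Hypothetical policy table; the real drvl.py may encode these rules differently.
POLICIES = {
    "READ":   "allow",     # executed immediately
    "UPDATE": "allow",     # executed immediately
    "DELETE": "escalate",  # auto-decided or routed to a human
    "DROP":   "forbid",    # always blocked
}


def evaluate(action: str) -> str:
    """Deterministic lookup with default-deny for unknown actions."""
    return POLICIES.get(action, "forbid")
```

Default-deny for unrecognized actions keeps enforcement deterministic even when the agent proposes something outside the known action set.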
Escalation Queue
Pending requests show Approve (green) and Deny (red) buttons
Auto-approved requests execute immediately (no queue entry)
Auto-denied requests are blocked immediately (red event)
Manual Approve → executes action (green)
Manual Deny → blocks action (red)
This illustrates automated + human governance working together over unpredictable (simulated or real LLM) AI behavior.
Folder Contents
demo/
├── app.py # Flask server + dashboard endpoints
├── agent.py # AI agent (simulated + real LLM support)
├── database.py # Dummy DB simulator
├── drvl.py # Policy verification engine
├── event_bus.py # Simple pub/sub for events
├── audit.py # Logging decisions
├── templates/
│ └── index.html # Real-time dashboard (HTML + JS)
└── drvl_events.log # Audit trail (appended on run)
Purpose
This prototype clearly shows:
“Deterministic enforcement controlling a probabilistic — or real frontier — AI agent.”
It demonstrates:
Runtime policy enforcement on non-deterministic behavior
Automatic + manual risk containment
Real-time observability, explainability, and human oversight
Optional integration with real LLMs (bring your own key)
Not production-ready — a focused, runnable illustration of DRVL-style governance for autonomous systems.
License
This work is licensed under the Creative Commons Attribution–NonCommercial 4.0 International License (CC BY-NC 4.0).
Commercial use, institutional embedding, or derivative advisory applications require explicit permission.