Antihero
Open Source · Apache 2.0 · ISO 13482 · EU AI Act

Make your robots
insurable

Declarative safety policies, sub-100µs enforcement, MuJoCo digital twin validation, and cryptographic audit trails — the infrastructure layer that lets insurers underwrite your fleet.

Get Started Free Read the Docs
$ pip install antihero
< 100µs Policy evaluation at 1kHz
520+ scenarios Adversarial certification
MuJoCo Digital twin validation
Ed25519 Signed audit trails
World ID ZK human authorization

Robots are shipping. Insurance isn't. Unitree is shipping 20,000 humanoids in 2026. The EU AI Act becomes fully applicable in August 2026. Figure AI is facing a skull-fracture lawsuit. Nobody can insure what they can't measure. Antihero makes robot behavior measurable, auditable, and insurable. Read our manifesto →

We prove which human authorized it, what policy governed it, and that a human approved the specific action — all cryptographically signed.

You can't audit a neural net. You can audit every action it tries to take.

From uninsurable to underwritten

How it works

1

Robots execute physical actions autonomously, but nobody can verify safety or measure risk. Carriers can't underwrite what they can't observe.

2

Antihero enforces declarative safety policies at the control loop boundary and validates physical actions in a MuJoCo digital twin before they reach hardware. Sub-100µs. Fail-closed.

3

Signed evidence chains feed directly into carrier underwriting models. Robots become insurable. Premiums reflect measured risk.

[Diagram: robot action → policy engine → evidence chain (Ed25519 · RFC 8785 · hash-chained) → insurance carrier. From policy to premium.]
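The fail-closed gate in step 2 can be pictured as a small wrapper. This is an illustrative sketch only: `simulate`, `SimResult`, and the force limit are hypothetical stand-ins, not Antihero APIs.

```python
from dataclasses import dataclass

@dataclass
class SimResult:
    collision: bool
    peak_force_n: float
    in_workspace: bool

def simulate(plan) -> SimResult:
    # Stand-in for a MuJoCo digital-twin rollout of the motion plan.
    # A real twin would step physics, collision, and boundary checks here.
    return SimResult(collision=False, peak_force_n=42.0, in_workspace=True)

MAX_FORCE_N = 150.0  # hypothetical per-policy force limit

def gate(plan, simulator=simulate) -> bool:
    """Fail-closed: execute only if simulation passes every check."""
    try:
        result = simulator(plan)
    except Exception:
        return False  # a simulation failure blocks the action
    return (not result.collision
            and result.peak_force_n <= MAX_FORCE_N
            and result.in_workspace)

assert gate(object()) is True
```

The key design point is the `except` branch: any failure to validate denies the action, matching "fail-closed" above.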

Identity. Enforcement. Approval.

The three-layer safety stack for Physical AI systems. Define behavioral constraints, enforce at runtime, prove compliance with cryptographic audit trails — all in a single SDK.

Identity
Principal Binding
Every robot action is cryptographically bound to a verified human principal. Not "which API key" — which person, proven via OAuth, passkey, or SAML. Delegation chains tracked and depth-limited by policy.
principal: human_id + verified_via
delegation_depth: policy-enforced
Enforcement
Runtime Policy Engine
Declarative rules enforced at the action boundary. Every tool call, API request, and resource access is evaluated before execution. Fail-closed. Sub-millisecond. Deny dominates. Policy guard simulation tests changes before deployment — no regressions, ever.
effect: deny | allow | allow_with_requirements
evaluation: < 1ms p99
Approval
Human Proof-of-Authorization
Zero-knowledge biometric verification via World ID. Every high-risk action requires cryptographic proof of human authorization — bound to the specific action hash. No replays. No identity exposure. TOTP, passkey, and webhook fallbacks for air-gapped environments.
kind: human_proof · World ID ZK + Ed25519
action_hash binding · non-repudiable

One integration.
Every robot insurable.

1

Connect your robots

Auto-detects your stack — ROS 2, LeRobot, MuJoCo, or Isaac Sim — and generates starter safety policies. Drop in one import or call the REST API.

2

Define your policies

Declarative YAML in your repo. Simulate changes with the policy guard before deploying, verifying that coverage improves without introducing regressions.
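A policy file of this shape might look like the sketch below. The field names and file path are illustrative assumptions, not Antihero's documented schema; only the `effect` values and the action/resource names come from this page.

```yaml
# policies/warehouse.yaml — hypothetical example
policies:
  - id: arm-speed-limit
    action: motion.arm.move
    resource: workspace.zone_A
    effect: allow_with_requirements
    requirements:
      max_velocity_mps: 0.5
      max_payload_kg: 10
  - id: deny-by-default   # deny dominates
    action: "*"
    effect: deny
```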

3

Ship with evidence

Continuous certification runs 520+ scenarios, auto-generates policy suggestions from gaps, and escalates unresolved findings. Premiums drop as your evidence compounds.

Want the full picture? Explore the documentation

app.py
from antihero import Antihero

client = Antihero(api_key="ah_...")

# Evaluate before every tool call
decision = client.evaluate(
    action="motion.arm.move",
    resource="workspace.zone_A",
    agent_id="warehouse-bot-01",
    context={
        "velocity_mps": 0.4,
        "payload_kg": 8.5,
    },
)

if decision.effect == "allow":
    result = execute_motion(plan)  # your application's motion executor
    client.record(decision, outcome="success")
else:
    log(f"Blocked: {decision.reason}")

Try the policy engine

See how Antihero evaluates a robot action in real time. No signup required.

120+ REST API endpoints  ·  Webhooks  ·  API Reference

Defense in depth for autonomous systems.

Digital Twin Validation
Simulate every action in a digital twin before physical execution. Physics-based validation, collision detection, and workspace boundary enforcement in parallel.
Fleet Management
Fleet-wide safety coordination, role-based access control, and graduated escalation for multi-robot systems across warehouse, healthcare, and field deployments.
Incident Response & Kill Switch
Quarantine robots instantly, block actuators, freeze operations. Emergency kill switch in one API call. Evidence bags preserve full forensic context.
Sim-Before-Execute
Every motion plan runs through physics simulation before reaching actuators. Collision prediction, force limit checks, and workspace boundary validation in real time.
Compliance Mapping
Seven built-in frameworks: SOC 2, HIPAA, EU AI Act, NIST AI RMF, NIST 800-53, FedRAMP, and EO 14110. Automated posture assessment and gap analysis.
Self-Healing Policies
Certification finds coverage gaps, auto-generates candidate deny/allow rules, and surfaces them for human review. Escalating alerts ensure nothing stays unresolved.
Proof of Human
Cryptographic proof that a verified human authorized every high-risk robot action. World ID zero-knowledge biometric verification, TOTP, webhook callbacks, and passkeys. Answers the three trust questions: who authorized it, what policy governed it, did a human approve it.
Learned Controller Enforcement
End-to-end neural nets are black boxes. Antihero is the safety layer between learned weights and physical action. Every action tensor checked in <100µs — Helix 2, GR00T, pi0, or any VLA model.
Self-Improving Agent Safety
Certified against recursive self-improvement attacks — policy tampering, audit trail manipulation, privilege escalation, and reward hacking. 20 adversarial scenarios ensure agents cannot bypass external enforcement.
Behavioral Black Box
Like an aircraft's flight recorder. Every policy decision, every sensor reading, every deny event — recorded with cryptographic integrity at up to 100Hz. Tamper-evident, Ed25519-signed hash chains. AHDS-2 compliant.
Data Sovereignty
Your robot sees everything. Antihero controls what it remembers. Perception enforcement, privacy zones, and data exfiltration prevention — GDPR and CCPA compliant. 30 certification scenarios for surveillance safety.
Hardware Certification
Certified on Jetson Thor at 130W. Different power envelopes, different behavioral guarantees. We test safety under compute pressure, thermal throttle, and battery drain.
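The hash chaining behind the behavioral black box can be sketched with the standard library alone. This is illustrative only: Antihero's real entries use RFC 8785 canonicalization and Ed25519 signatures, which this sketch approximates with compact sorted-key JSON and SHA-256 chaining.

```python
import hashlib
import json

def canonicalize(record: dict) -> bytes:
    # Simplified stand-in for RFC 8785: sorted keys, compact separators.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def append(chain: list, record: dict) -> dict:
    # Each entry's hash covers the previous hash, so any edit breaks the chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev": prev}
    entry["hash"] = hashlib.sha256(prev.encode() + canonicalize(record)).hexdigest()
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(prev.encode() + canonicalize(entry["record"])).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"action": "motion.arm.move", "effect": "allow"})
append(chain, {"action": "motion.arm.move", "effect": "deny"})
assert verify(chain)
chain[0]["record"]["effect"] = "deny"  # tamper with history
assert not verify(chain)
```

Signing each entry's hash (e.g. with Ed25519) would add non-repudiation on top of the tamper evidence shown here.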

Works with your stack.

Adapters for ROS 2, LeRobot, and MuJoCo, plus an MCP server with 8 policy tools and a Claude Code skill. Use Antihero from your terminal, your IDE, or your robot's control stack.

ROS 2
MuJoCo
LeRobot
Isaac Sim
GR00T
Helix 2
pi0
Foxglove
Rerun
ISO 10218
ISO/TS 15066
ISO 13482
Teleop
World ID
AHDS-2
Cybersecurity
Data Sovereignty
MCP Server
8 tools via antihero serve
Policy checking, certification, policy guard simulation, audit trail, and risk status — exposed as MCP tools. Works with Claude Desktop, Cursor, OpenCode, or any MCP client. Every call is policy-gated and audit-logged.
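Registering the server with an MCP client might look like the fragment below (shown for Claude Desktop's `mcpServers` config format); the server name and args are assumptions based on `antihero serve`, not documented configuration.

```json
{
  "mcpServers": {
    "antihero": {
      "command": "antihero",
      "args": ["serve"]
    }
  }
}
```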
ROS 2 Adapter & Claude Code Skill
Native ROS 2 integration + 5 guided workflows
Drop-in ROS 2 node for runtime policy enforcement, plus guided workflows to generate policies from natural language, certify robots against 520+ scenarios, investigate audit trails, simulate policy changes, and quick-check any action.
World ID Integration
Zero-knowledge human verification
Biometric proof-of-human via World ID’s zero-knowledge protocol. Every high-risk approval is cryptographically bound to the action hash — no identity exposure, no replays. Fallback to TOTP and passkeys for air-gapped deployments.

Python SDK · ROS 2 Adapter · REST API · CLI · MCP Server · Claude Code Skill

Built for the regulatory wave.

Antihero maps to every major AI governance framework. Not checkboxes — continuous, machine-readable evidence from real enforcement data. See the AHDS-1 actuarial data specification for the formal schema.

EU AI Act
Article 14 Compliance
Full coverage of all 5 human oversight requirements: understanding, monitoring, interpretation, override, and intervention. Enforcement begins August 2026.
NIST AI RMF
MAP · GOVERN · MEASURE
Automated mapping to NIST AI Risk Management Framework. Identity binding (MAP 1.5), policy governance (GOVERN 1.3), approval oversight (GOVERN 1.7), audit evidence (MEASURE 2.6).
Enterprise
SOC 2 · HIPAA · FedRAMP
One-click compliance exports with evidence provenance. Hash-chained audit trails satisfy CC6.1 (access control), CC7.2 (monitoring), and CC8.1 (change management).
Emerging standards:
IETF WIMSE (Agent Identity) · W3C Verifiable Credentials 2.0 · NIST AI Robotics Safety Initiative · ISO/IEC 42001 (AI Management)

Technical foundations.

Peer-reviewed architecture and formal specifications. Follow the latest updates for new publications.

Whitepaper · February 2026

Distributed Safety Architecture for Autonomous Robotics

Action-level safety, cryptographic accountability, and insurance infrastructure for the robotics era. Formalizes the three-layer stack, introduces TCE/PDE/AEE primitives, and presents the economic thesis for robot liability insurance.

Read PDF
arXiv Preprint · March 2026

Antihero: A Multi-Layered Runtime Enforcement Architecture for Autonomous Robot Safety

Systems paper: multi-engine threat detection, OS-level sandbox profiles, auto-remediation playbooks, community threat intelligence, and a queryable threat relationship graph. 1,916 tests, zero failures.

Read PDF
Data Specification AHDS-1 · March 2026

How to Underwrite Autonomous Robot Risk: A Data Specification

The first formal schema defining what enforcement data insurers need to price, bind, and settle autonomous robot liability. Covers risk factor computation, 7-layer fraud detection, compliance mapping, and machine-readable JSON Schema definitions.

Read Spec
Analysis · March 2026

You Can't Audit a Neural Net

Why end-to-end learned controllers (Helix 2, GR00T, pi0) are opaque to traditional safety auditing, and why external enforcement at the action boundary is the only viable safety strategy for neural-net-driven robots.

Read Analysis
Analysis · March 2026

Meta's Hyperagents Proves Why External Safety Enforcement Matters

Analysis of Meta's DGM-H self-improving foundation model agents and why internal guardrails fail when agents can recursively modify their own behavior, tools, and objectives. External enforcement is immune to self-modification.

Read Analysis

Start free. Scale as you grow.

Open-source SDK with a managed cloud for teams that need enforcement at scale.

Open Source
Free
Apache 2.0 forever
Policy engine + YAML rules
Hash-chained audit trails
Ed25519 signatures
CLI + Python SDK
Community support
View on GitHub
Startup
$29/mo
Per org, billed monthly
25K events/month
5 robots
Human approval gates
SOC 2 export
Email support
Get Started
Business
$99/mo
Per org, billed monthly
250K events/month
25 robots
Principal identity binding
$100K robot liability coverage
3 compliance frameworks
Priority support
Get Started
Enterprise
Custom
Annual contract
Unlimited events + robots
$1M+ robot liability coverage
All compliance frameworks
Dedicated instance
99.99% SLA
Dedicated support
Talk to Sales

EU AI Act enforcement starts August 2026.

The companies that ship insurable robots first will define the market. Start building your safety infrastructure today.