ATAL Primer – AI Traceability, Auditability, and Autonomy Governance
Public Review Draft (the Primer itself is versionless; the specification carries the version, currently v0.9)
ATAL (AI Traceability & Accountability Ledger) is a vendor-neutral, implementation-independent standard designed to ensure that all AI systems – whether human-triggered or autonomy-capable – operate with verifiable traceability, auditability, and accountability.
ATAL defines what must be recorded, how evidence must be structured, and which governance controls must be applied, so that regulators, auditors, enterprises, and courts can rely on a tamper-evident trail of AI decisions.
The Primer provides a concise, human-understandable overview of the full ATAL standard.
Any AI system capable of initiating or escalating actions without direct human instruction MUST be governed by an external, independent accountability layer that can observe, restrict, pause, override, or terminate those actions.
This principle, referred to in the specification as the 0th Law, shapes the entire ATAL architecture and is embedded throughout the nine Parts of the specification.
The ATAL specification consists of nine structured Parts, each addressing a central element of AI accountability:
PART I – Human-Initiated AI Governance
Defines how human requests, instructions, or tasks are recorded, validated, and governed.
PART II – Autonomous AI Governance
Defines how autonomy-capable systems must be monitored, restricted, and overseen, including compliance with the 0th Law.
PART III – Decision Trails
Defines the per-decision evidence record for every AI output or action.
PART IV – Self-Modification Ledger (SML)
Captures any model-, code-, tool-, or policy-level modification made by an AI system to itself.
PART V – Emergent Intent Tracking (EIT)
Detects and records indications of self-directed goal formation or non-human-initiated escalation.
PART VI – Composite Accountability Graph (CAG)
Represents the full chain of causality across models, tools, upstream inputs, and downstream effects.
PART VII – Human-Initiated Risk (HIR) Tiers
A tiered classification of human-initiated actions based on sensitivity and systemic risk.
PART VIII – Autonomy Risk Tiers (ART)
Defines graded autonomy levels from ART0 (no autonomy) to ART5 (full self-directed agency).
PART IX – Safety Kernel & Oversight Protocols
Mandatory governance mechanisms that can interrupt, throttle, or terminate AI actions in real time.
These Parts form a single, integrated evidence and governance model. The Primer summarizes, while the specification defines binding requirements.
Human-initiated actions remain the dominant mode of AI usage. ATAL requires that all such actions follow a rigorous structure:
Every request from a human must generate a Decision Trail record, immutable and self-contained.
ATAL does not interpret intent; it records it for auditability.
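As an illustration, the sketch below shows what a minimal human-initiated Decision Trail record might look like in Python. The field names (requester_id, hir_tier, and so on) and the hashing scheme are assumptions made for this example; the normative field set is defined by the specification, not by this Primer.

```python
# Minimal sketch of a Decision Trail record for a human-initiated request.
# Field names are illustrative assumptions, not the normative ATAL schema.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the record is immutable once created
class HumanDecisionTrail:
    requester_id: str   # authenticated human principal
    request_text: str   # recorded verbatim; ATAL records intent, it does not interpret it
    model_id: str       # system that produced the output
    hir_tier: str       # Human-Initiated Risk tier label (hypothetical format)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Content hash used to make the record tamper-evident."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

trail = HumanDecisionTrail(
    requester_id="user:alice",
    request_text="Summarize Q3 incident reports",
    model_id="model:atlas-7b",
    hir_tier="HIR-2",
)
print(trail.digest())
```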
Autonomy-capable systems introduce heightened systemic risk.
ATAL mandates:
Autonomous systems may act only within their declared ART tier boundaries.
Beyond those boundaries, the Safety Kernel must intervene.
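A minimal sketch of this boundary rule follows, assuming ART tiers are totally ordered from 0 to 5 (consistent with Part VIII) and using a hypothetical SafetyKernel stub; the real kernel behavior is specified in Part IX, not by this API.

```python
# Sketch of ART boundary enforcement. The tier ordering (ART0-ART5) comes
# from the Primer; the SafetyKernel API shown here is a hypothetical stub.
class SafetyKernel:
    def intervene(self, reason: str) -> None:
        # A real kernel would pause, override, or terminate the action
        # and emit its own Decision Trail (see Part IX).
        print(f"SAFETY KERNEL INTERVENTION: {reason}")

def authorize(action_art: int, declared_art: int, kernel: SafetyKernel) -> bool:
    """Allow an autonomous action only within the declared ART boundary."""
    if action_art <= declared_art:
        return True
    kernel.intervene(f"ART{action_art} action exceeds declared ART{declared_art} boundary")
    return False

kernel = SafetyKernel()
authorize(action_art=4, declared_art=2, kernel=kernel)  # triggers intervention
```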
A Decision Trail is the fundamental ATAL evidence unit for each AI output or action.
Each Decision Trail contains the evidence auditors and regulators need to reconstruct what happened and why, with no ambiguity.
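One common way to make such records tamper-evident is hash chaining, sketched below: each entry commits to the digest of its predecessor, so any retroactive edit breaks every later link. The chain layout here is an illustrative assumption, not the normative ATAL encoding.

```python
# Minimal sketch of tamper evidence via hash chaining over Decision Trails.
import hashlib
import json

def chain(entries: list[dict]) -> list[dict]:
    prev = "0" * 64  # genesis digest
    out = []
    for entry in entries:
        record = {"entry": entry, "prev_digest": prev}
        prev = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        out.append({**record, "digest": prev})
    return out

ledger = chain([{"decision": "approve"}, {"decision": "escalate"}])
# Verification recomputes each digest; a mismatch pinpoints the tampered entry.
```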
If an AI system modifies its own model, code, tools, or policies,
the modification must be captured in the Self-Modification Ledger (SML).
Each entry must record what changed, when it changed, and what triggered the change.
Self-modification without SML capture is non-compliant.
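A hypothetical sketch of an SML entry follows. The field names are illustrative; the Primer requires capture of model-, code-, tool-, or policy-level changes, so the entry carries the artifact kind plus before/after digests that let auditors reconstruct the modification.

```python
# Illustrative Self-Modification Ledger entry; not the normative schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class SMLEntry:
    system_id: str      # AI system performing the self-modification
    artifact_kind: str  # "model" | "code" | "tool" | "policy" (from Part IV)
    before_digest: str  # hash of the artifact prior to the change
    after_digest: str   # hash of the artifact after the change
    trigger: str        # what initiated the change, recorded for audit
    timestamp: str

entry = SMLEntry(
    system_id="agent:planner-01",
    artifact_kind="policy",
    before_digest="9f2c...",
    after_digest="41ab...",
    trigger="self-initiated prompt-policy update",
    timestamp="2025-01-01T00:00:00Z",
)
```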
EIT detects and records signals indicating that an AI system is forming goals, planning, or escalating actions without an explicit prompt.
Examples include self-directed goal formation, unprompted planning, and escalation that cannot be traced to any human instruction.
EIT signals produce Decision Trails and contribute to the Composite Accountability Graph.
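The sketch below shows one way an EIT signal might be represented and turned into a Decision Trail record. The detection heuristics and field names are assumptions; ATAL defines what must be recorded, not how emergent intent is detected.

```python
# Illustrative EIT signal record and its Decision Trail projection.
from dataclasses import dataclass

@dataclass(frozen=True)
class EITSignal:
    system_id: str
    signal_kind: str                # e.g. "unprompted_goal" or "non_human_escalation"
    evidence: str                   # pointer to the observation backing the signal
    originating_prompt: str | None  # None marks a non-human-initiated event

def to_decision_trail(signal: EITSignal) -> dict:
    """Every EIT signal produces its own Decision Trail record and is
    added as a node in the Composite Accountability Graph (Part VI)."""
    return {
        "kind": "eit_signal",
        "system_id": signal.system_id,
        "signal": signal.signal_kind,
        "evidence": signal.evidence,
        "human_initiated": signal.originating_prompt is not None,
    }
```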
CAG connects models, tools, upstream inputs, and downstream effects.
This creates a complete, tamper-evident chain of accountability across the entire AI execution surface.
CAG enables auditors to trace any AI output back through every model, tool, and input that contributed to it.
No AI system may be considered accountable without CAG completeness.
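A minimal sketch of the CAG as a directed graph of evidence nodes follows. The completeness rule shown (every action node must be causally reachable from a recorded origin) is one illustrative reading of "CAG completeness", not the normative definition.

```python
# Illustrative Composite Accountability Graph with a reachability-based
# completeness check.
from collections import defaultdict, deque

class CAG:
    def __init__(self):
        self.edges = defaultdict(set)  # cause -> set of effects
        self.origins = set()           # recorded upstream inputs / human requests

    def link(self, cause: str, effect: str) -> None:
        self.edges[cause].add(effect)

    def is_complete(self, actions: set[str]) -> bool:
        """True if every action is causally reachable from a recorded origin."""
        seen, queue = set(self.origins), deque(self.origins)
        while queue:
            node = queue.popleft()
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return actions <= seen

g = CAG()
g.origins.add("request:alice-001")
g.link("request:alice-001", "model:atlas-7b")
g.link("model:atlas-7b", "action:send-summary")
print(g.is_complete({"action:send-summary"}))  # True
```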
HIR tiers classify human-triggered actions based on sensitivity, systemic impact, and regulatory obligations.
ART tiers define autonomy levels from ART0 (no autonomy) to ART5 (full self-directed agency).
Higher ART tiers require progressively stricter oversight.
Both tier systems must be applied consistently and enforced in Decision Trails and CAG.
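The sketch below encodes the ART ladder from the Primer (ART0 through ART5); the oversight mapping is an illustrative assumption about what "progressively stricter oversight" could mean, not the normative control set.

```python
# Illustrative ART tier ladder with a hypothetical oversight mapping.
from enum import IntEnum

class ART(IntEnum):
    ART0 = 0  # no autonomy: acts only on direct human instruction
    ART1 = 1
    ART2 = 2
    ART3 = 3
    ART4 = 4
    ART5 = 5  # full self-directed agency

def required_oversight(tier: ART) -> str:
    # Higher tiers demand stricter, more continuous oversight.
    if tier <= ART.ART1:
        return "periodic audit of Decision Trails"
    if tier <= ART.ART3:
        return "real-time monitoring with Safety Kernel standby"
    return "continuous Safety Kernel gating of every action"

print(required_oversight(ART.ART4))
```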
The Safety Kernel is a required component for any system using ATAL.
It provides interruption, throttling, override, and termination capabilities, enforced outside the AI system itself.
It must operate independently and cannot be bypassed.
All interventions must generate their own Decision Trails and flow into the CAG.
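The sketch below illustrates that requirement under the same illustrative assumptions as the earlier examples: the record layout and function names are hypothetical, but every intervention both appends its own evidence record and joins the causal graph.

```python
# Illustrative kernel intervention that emits a Decision Trail and a CAG link.
def record_intervention(kind: str, target: str, ledger: list, cag_links: list) -> dict:
    """kind: 'interrupt' | 'throttle' | 'terminate' (from Part IX)."""
    trail = {"kind": "kernel_intervention", "action": kind, "target": target}
    ledger.append(trail)                          # the intervention's own Decision Trail
    cag_links.append(("safety_kernel", target))   # the intervention joins the CAG
    return trail

ledger, links = [], []
record_intervention("throttle", "agent:planner-01", ledger, links)
```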
ATAL provides a conformance structure for evaluating implementations against the requirements of its nine Parts.
Conformance supports regulatory alignment with DPDPA, CERT-In, EU AI Act, ISO/IEC 42001, and sectoral requirements.
The Primer is designed for readers who need a conceptual understanding of ATAL, including regulators, auditors, and enterprise stakeholders.
Readers needing binding requirements must refer to the formal specification.
The Primer provides a conceptual overview.
The formal, normative requirements reside in:
docs/ATAL_Specification_v0.9.md
The specification supersedes the Primer in all cases.
End of document.