Public Review Draft v0.9 – AI Traceability, Auditability, and Autonomy Governance
ATAL (AI Traceability & Accountability Ledger) is a vendor-neutral, implementation-independent standard for recording, governing, and auditing AI decisions.
It ensures that every AI action — human-initiated or autonomous — is captured in a tamper-evident, regulator-ready evidence structure.
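One common way to make an evidence record tamper-evident is hash chaining: each entry's hash covers both its own content and the previous entry's hash, so any later alteration invalidates everything after it. This is a minimal, illustrative sketch under that assumption; the function and field names are hypothetical, not part of the ATAL schema.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash the record together with the previous entry's hash,
    so modifying any earlier entry breaks the chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "hash": record_hash(record, prev)})

def verify(ledger: list) -> bool:
    """Recompute every hash; a single altered record invalidates the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append(ledger, {"actor": "model-a", "action": "approve_refund"})
append(ledger, {"actor": "model-a", "action": "notify_user"})
assert verify(ledger)

ledger[0]["record"]["action"] = "approve_transfer"  # simulated tampering
assert not verify(ledger)
```

Tamper evidence here means tampering is detectable, not preventable; detection is what makes the record regulator-ready.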
AI systems make consequential decisions, but those decisions are often unrecorded, unauditable, and causally opaque. ATAL addresses this by defining what must be recorded, how oversight must work, and how causality must be preserved across full AI workflows.
Any AI system capable of initiating or escalating actions without direct human instruction MUST be governed by an external, independent accountability layer that can observe, restrict, pause, override, or terminate those actions.
This principle is foundational and applies broadly across all nine Parts of the specification.
No. ATAL is a standard: it defines the rules and evidence structures required for AI accountability. Products and implementations must adhere to ATAL, but the standard itself contains no code.
The standard is structured across nine Parts (I–IX), covering areas such as per-action evidence (Decision Trails), causal reconstruction (the CAG), human-initiated risk tiers (HIR), autonomy tiers (ART0–ART5), and independent enforcement (the Safety Kernel, Part IX). See the specification for full details.
No. ATAL applies to both human-initiated and autonomous AI actions. Both modes must produce verifiable, complete ledger records.
No. ATAL is model-agnostic and vendor-agnostic: it does not prescribe particular models, vendors, architectures, or tooling. ATAL defines outcomes, not implementation choices.
ATAL is designed to align with emerging AI governance regulation and audit frameworks, providing the forensic and governance requirements that regulators expect.
A Decision Trail is the per-action evidence record: the atomic evidence unit in ATAL, capturing what was decided, by which actor, from which inputs, and under what oversight.
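A per-action evidence record of this kind might be sketched as an immutable value object. The field names below are illustrative only, not ATAL's normative schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the record cannot be mutated after creation
class DecisionTrail:
    """Illustrative per-action evidence record; fields are hypothetical."""
    action_id: str
    actor: str              # model or human identity
    initiated_by: str       # "human" or "autonomous"
    inputs: dict            # evidence the decision was based on
    decision: str
    timestamp: str
    approvals: tuple = ()   # human sign-offs, if any

trail = DecisionTrail(
    action_id="act-001",
    actor="credit-model-v3",
    initiated_by="autonomous",
    inputs={"score": 712},
    decision="approve",
    timestamp="2025-01-01T00:00:00Z",
)
assert trail.initiated_by == "autonomous"
```

Freezing the record mirrors the standard's intent that evidence is appended, not edited in place.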
The CAG is a causal graph linking decisions, actions, and the events that triggered them. It enables full reconstruction of "what happened and why."
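Reconstruction from a causal graph amounts to walking the edges backwards from an action to every contributing cause. A minimal sketch, with a hypothetical edge list standing in for a real CAG:

```python
# Hypothetical edge list: each action maps to the events that caused it.
caused_by = {
    "send_email": ["draft_reply"],
    "draft_reply": ["classify_intent", "fetch_history"],
    "classify_intent": ["user_message"],
    "fetch_history": ["user_message"],
}

def reconstruct(action: str) -> set:
    """Walk the causal graph backwards to collect every contributing cause."""
    causes, stack = set(), [action]
    while stack:
        node = stack.pop()
        for parent in caused_by.get(node, []):
            if parent not in causes:
                causes.add(parent)
                stack.append(parent)
    return causes

# Every cause of the outgoing email, back to the originating user message.
assert reconstruct("send_email") == {
    "draft_reply", "classify_intent", "fetch_history", "user_message"
}
```

The same traversal run forwards answers the dual question: which downstream actions a given event influenced.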
Yes, but only if the modification is itself recorded, attributable, and auditable. Unaudited or hidden modification is non-compliant.
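One way to satisfy this is to record corrections as new append-only entries that reference the original, rather than rewriting history. A sketch, with illustrative field names:

```python
# Append-only log: corrections reference earlier entries, never rewrite them.
log = [{"id": 1, "decision": "approve"}]

def amend(log: list, target_id: int, change: dict,
          reason: str, approver: str) -> None:
    """Record a correction as a new, attributable entry."""
    log.append({
        "id": len(log) + 1,
        "amends": target_id,
        "change": change,
        "reason": reason,
        "approver": approver,
    })

amend(log, 1, {"decision": "deny"}, "fraud review reversal", "auditor-7")
assert log[0]["decision"] == "approve"  # original entry preserved
assert log[-1]["amends"] == 1           # the correction is itself evidence
```

The correction becomes part of the evidence trail, so an auditor sees both the original decision and who changed it, when, and why.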
The Safety Kernel (Part IX) provides observe, restrict, pause, override, and terminate capabilities independent of the AI system. Every intervention is logged.
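The shape of such an external gate can be sketched as follows: every AI action passes through the kernel, and every intervention and blocked action lands in an audit log. The class and method names are hypothetical, not defined by ATAL.

```python
from enum import Enum

class Intervention(Enum):
    OBSERVE = "observe"
    RESTRICT = "restrict"
    PAUSE = "pause"
    OVERRIDE = "override"
    TERMINATE = "terminate"

class SafetyKernel:
    """Illustrative external accountability layer: the AI system cannot
    act except through permit(), and interventions are always logged."""

    def __init__(self) -> None:
        self.audit_log: list = []
        self.paused = False

    def intervene(self, kind: Intervention, reason: str) -> None:
        self.audit_log.append((kind.value, reason))
        if kind is Intervention.PAUSE:
            self.paused = True

    def permit(self, action: str) -> bool:
        if self.paused:
            self.audit_log.append(("blocked", action))
            return False
        self.audit_log.append(("observe", action))
        return True

kernel = SafetyKernel()
assert kernel.permit("send_report")
kernel.intervene(Intervention.PAUSE, "anomalous behaviour")
assert not kernel.permit("send_report")
```

The key property is independence: the kernel holds its own state and log, outside the AI system it governs.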
HIR Tiers classify human-initiated actions based on sensitivity and risk.
ART Tiers define autonomy levels (ART0–ART5) and their oversight requirements.
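The autonomy tiers could be modelled as an ordered enumeration with an oversight policy keyed to them. The ART0–ART5 labels come from the standard; the approval policy below is purely hypothetical, for illustration of how tiers gate oversight.

```python
from enum import IntEnum

class ART(IntEnum):
    """Autonomy tiers ART0-ART5 (labels from the standard)."""
    ART0 = 0  # no autonomy: human initiates and executes
    ART1 = 1
    ART2 = 2
    ART3 = 3
    ART4 = 4
    ART5 = 5  # highest autonomy

def requires_human_approval(tier: ART, threshold: ART = ART.ART3) -> bool:
    # Hypothetical policy, not ATAL-mandated: actions at or above the
    # threshold tier must pass a human pre-approval gate.
    return tier >= threshold

assert not requires_human_approval(ART.ART1)
assert requires_human_approval(ART.ART4)
```

An analogous mapping could key HIR tiers to the evidence and sign-off required for human-initiated actions.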
The standard is stewarded by Elytra Security.
Governance and stewardship rules are defined in the respective documents in this repository.
During public review windows, feedback and contributions are accepted through the process described in PEER_REVIEW.md.
Yes. The standard is publicly available and governed by the license defined in LICENSE.md. Compliance or certification models may be defined separately.
ATAL provides a conformance structure but does not require certification.
Certification may be offered by independent bodies in the future.
ATAL itself does not store data.
Implementations must follow relevant privacy laws (DPDPA, GDPR, etc.) when recording evidence.
The implementation (maintained separately) follows ATAL but does not define it.
The standard remains vendor-neutral.
No. ATAL does not replace organisational governance programs, risk scoring, or ethics frameworks, and it does not prescribe what your policies must be. ATAL does define mandatory runtime accountability boundaries and enforcement requirements (via gateways and a Safety Kernel) so AI actions remain governable, auditable, and reconstructable.
No. ATAL does not mandate blockchain, distributed ledgers, or consensus mechanisms. The term “ledger” is used conceptually to describe a system of record for accountability events.
No. While ATAL may interoperate with governance frameworks, audit processes, or observability systems, it is fundamentally different in purpose and design. ATAL defines mandatory technical structures for accountability evidence and enforceable governance boundaries at runtime. It does not prescribe policies, ethics, or organisational processes, nor does it rely on optional logging or monitoring mechanisms.
End of document.