

What the EU AI Act is

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, with obligations phasing in through 2026. The Act takes a risk-based approach, classifying AI systems into four risk tiers:
  • Unacceptable risk: prohibited (social scoring, real-time biometric ID with limited exceptions, etc.).
  • High risk: heavily regulated; Articles 9-15 establish the obligations.
  • Limited risk: transparency obligations under Article 50.
  • Minimal risk: largely unregulated.
Most consequential AI deployments — including those VERDICT WEIGHT is built for — fall into the high-risk tier. The Act’s high-risk obligations are the operative compliance challenge.

Coverage summary

VERDICT WEIGHT addresses the technical controls underlying Articles 9 (risk management), 10 (data governance), 12 (record-keeping), 13 (transparency), 14 (human oversight), and 15 (accuracy, robustness, cybersecurity). It does not address Article 11 (technical documentation, which is operator-produced) or the conformity-assessment procedures, which are organizational.

Article 9: Risk management system

Article 9 requires a continuous, iterative risk management system covering the AI system’s lifecycle.
How VERDICT WEIGHT supports each Article 9 requirement:
  • 9(2)(a) – identification and analysis of risks: the failure-class taxonomy F1-F8 (Completeness proof) is a structured risk-identification framework specific to confidence-based AI decisioning.
  • 9(2)(b) – estimation and evaluation of risks: reliability error, ablation studies, and adversarial detection rates provide quantitative risk evaluation.
  • 9(2)(c) – evaluation of other risks based on data: out-of-distribution detection via Stream 2 and Stream 4 provides ongoing risk evaluation under post-market data.
  • 9(2)(d) – risk management measures: the composition rule’s veto / abstention / aggregation routing is the framework’s risk-response logic.
  • 9(7) – testing throughout development: the 673-test suite (Coverage overview) and the IEEE-grade hardening procedure constitute evidence of systematic testing.
  • 9(9) – specific consideration for vulnerable groups: operator-supplied per use case; the framework provides per-stream interpretability that supports group-specific impact analysis.
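To make the 9(2)(d) risk-response logic concrete, the veto / abstention / aggregation routing can be sketched as follows. The stream names, veto floor, and abstention band here are illustrative assumptions, not the framework's actual configuration.

```python
from statistics import fmean

def route(stream_scores: dict[str, float],
          veto_streams: set[str],
          veto_floor: float = 0.2,
          abstain_band: tuple[float, float] = (0.4, 0.6)) -> tuple[str, float]:
    """Illustrative composition rule: veto, abstain, or aggregate.

    Any veto-capable stream scoring below veto_floor blocks the action
    outright; an aggregate inside abstain_band is surfaced for human
    review; otherwise the mean score feeds the decision.
    """
    for name in veto_streams:
        if stream_scores.get(name, 1.0) < veto_floor:
            return ("veto", 0.0)
    agg = fmean(stream_scores.values())
    lo, hi = abstain_band
    if lo <= agg <= hi:
        return ("abstain", agg)
    return ("aggregate", agg)
```

The ordering matters: veto checks run before aggregation, so a single compromised signal cannot be averaged away, which is the risk-management property 9(2)(d) asks for.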

Article 10: Data and data governance

Article 10 governs training, validation, and testing data.
How VERDICT WEIGHT supports each Article 10 requirement:
  • 10(2)(a) – relevant design choices for data: the NVD/KEV methodology documents the dataset construction rules used in validation.
  • 10(2)(b) – data collection processes documented: validation datasets are constructed from public, citable sources with documented filtering.
  • 10(2)(g) – identification of relevant data gaps: Known limitations is an explicit enumeration of data gaps in the published validation.
  • 10(3) – relevant, representative, free of errors: class balance, snapshot strategy, and exclusion rules are documented and reproducible.
VERDICT WEIGHT is a scoring layer, not a model. The operator’s compliance with Article 10 depends primarily on the data used to train and validate the upstream model stack. The framework’s calibration refit on deployment-representative data is one input to that compliance, not a substitute for it.

Article 12: Record-keeping

Article 12 requires automatic recording of events (“logs”) sufficient to support traceability.
How VERDICT WEIGHT supports each Article 12 requirement:
  • 12(1) – automatic logging of events: the cryptographic audit chain (Stream 7) records every scoring event automatically.
  • 12(2)(a) – period of use: timestamps on every record establish operational lifecycle data.
  • 12(2)(b) – reference database against which input data is checked: the registry hash and configuration version are recorded with each event.
  • 12(2)(c) – input data resulting in a match: the canonicalized evidence payload is preserved per scoring call.
  • 12(2)(d) – identification of natural persons involved in verification: operator identity is recorded for kill-switch and configuration events.
VERDICT WEIGHT’s audit chain is purpose-built to satisfy Article 12 and exceeds its requirements in two respects: cryptographic integrity (Article 12 does not require integrity protection, but tamper-evident logging is operationally far stronger than a merely appendable log) and full reproducibility (the chain enables deterministic replay, not just retrospective inspection).
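The tamper-evident property can be sketched with a minimal SHA-256 hash chain. The record fields (event, prev_hash, hash) are illustrative, not Stream 7's actual record format, and a real deployment would add operator-controlled signing on top.

```python
import hashlib
import json

def append_record(chain: list, event: dict) -> dict:
    """Append an event to a hash-chained audit log.

    Each record carries the SHA-256 of its predecessor, so altering any
    earlier record invalidates every hash that follows it.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    record = {**body, "hash": digest}
    chain.append(record)
    return record

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Serializing with sort_keys=True is what makes the digest deterministic and the chain replayable.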

Article 13: Transparency and provision of information

Article 13 requires high-risk systems to be transparent enough that deployers can interpret and use the output appropriately.
How VERDICT WEIGHT supports each Article 13 requirement:
  • 13(1) – designed for transparency: per-stream contributions are exposed in stream_breakdown; abstention and abort outcomes are accompanied by explicit reason strings.
  • 13(3)(a) – identity and contact details of the provider: documentation makes provider identity unambiguous.
  • 13(3)(b)(i) – characteristics, capabilities, limitations: this documentation site is the primary artifact.
  • 13(3)(b)(ii) – level of accuracy, robustness, cybersecurity: reported in Validation & Research with reproducibility instructions.
  • 13(3)(b)(iii) – circumstances which may lead to risks: the threat model and known limitations are enumerated explicitly.
  • 13(3)(b)(iv) – performance regarding specific persons or groups: operator-supplied per deployment; the framework supports per-group analysis through audit-chain replay.
  • 13(3)(b)(v) – specifications of input data: the evidence-payload schema is documented in Scorer and Pipeline.
  • 13(3)(b)(vi) – information enabling output interpretation: confidence, should_act, outcome, and stream_breakdown are designed precisely for this.
  • 13(3)(d) – human oversight measures: configurable thresholds, abstention rules, and escalation routing all support oversight.
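A deployer-side consumer of the 13(3)(b)(vi) fields might look like the sketch below. The result shape is an assumption inferred from the field names in this mapping (confidence, should_act, outcome, stream_breakdown, reason), not a verbatim schema.

```python
def explain(result: dict) -> str:
    """Render a human-readable explanation from a scoring result.

    Assumes a dict exposing confidence, should_act, outcome,
    stream_breakdown, and (for abstain/abort outcomes) a reason string.
    """
    lines = [f"outcome={result['outcome']} "
             f"confidence={result['confidence']:.2f} "
             f"should_act={result['should_act']}"]
    if result["outcome"] in ("abstain", "abort"):
        lines.append(f"reason: {result.get('reason', 'unspecified')}")
    # Per-stream contributions, sorted for a stable display order.
    for stream, score in sorted(result["stream_breakdown"].items()):
        lines.append(f"  {stream}: {score:.2f}")
    return "\n".join(lines)
```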

Article 14: Human oversight

Article 14 requires high-risk systems to be designed for effective human oversight.
How VERDICT WEIGHT supports each Article 14 requirement:
  • 14(2) – appropriate oversight measures: calibrated confidence makes thresholding meaningful; abstention surfaces ambiguity for review.
  • 14(4)(a) – understand capabilities and limitations: documentation, per-stream interpretability, and the known-limitations enumeration support understanding.
  • 14(4)(b) – remain aware of automation bias: calibrated confidence with documented reliability error counters automation bias by giving operators ground-truth-aligned uncertainty.
  • 14(4)(c) – correctly interpret the output: the per-stream breakdown plus reason strings support correct interpretation.
  • 14(4)(d) – decide not to use or to override: overriding should_act is the explicit responsibility of the calling system; the framework records overrides in the audit chain.
  • 14(4)(e) – intervene or interrupt: the kill switch (Stream 8) provides an authoritative interruption primitive.
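The 14(4)(d) pattern, a human override of should_act with the override itself logged, can be sketched as follows. The gate and its logger interface are hypothetical wiring in the calling system, not part of the framework's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class OversightGate:
    """Wraps a scoring result so a human decision can override should_act.

    Every override is emitted to an audit logger (illustrative interface),
    mirroring how the framework records overrides in its audit chain.
    """
    log: Callable[[dict], None]
    overrides: int = 0

    def decide(self, result: dict, human_override: Optional[bool] = None) -> bool:
        act = result["should_act"]
        if human_override is not None and human_override != act:
            self.overrides += 1
            self.log({"type": "override", "from": act, "to": human_override,
                      "confidence": result["confidence"]})
            return human_override
        return act
```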

Article 15: Accuracy, robustness, cybersecurity

Article 15 requires high-risk systems to achieve appropriate levels of accuracy, robustness, and cybersecurity.
How VERDICT WEIGHT supports each Article 15 requirement:
  • 15(1) – appropriate level of accuracy throughout the lifecycle: calibration (Stream 5) addresses this directly; refit procedures sustain it.
  • 15(2) – relevant accuracy metrics declared: REL, AUC, Brier, and ECE are all declared with confidence intervals. See Calibration curves.
  • 15(3) – robust against errors, faults, inconsistencies: Streams 2, 3, and 4 each address a distinct robustness dimension.
  • 15(4) – resilient to attempts at altering use, output, performance: curveball detection (Stream 6) addresses adversarial input; the registry kill switch (Stream 8) addresses scoring-layer compromise.
  • 15(5) – resilient against unauthorised third parties exploiting vulnerabilities: hash-chain integrity (Stream 7) detects tampering; registry hashing detects configuration manipulation.
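Two of the 15(2) metrics, the Brier score and expected calibration error (ECE), have standard definitions that can be computed directly. The sketch below uses equal-width confidence bins for ECE; the framework's own estimator may use a different binning strategy.

```python
def brier(probs: list, labels: list) -> float:
    """Mean squared error between predicted probability and the 0/1 outcome."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def ece(probs: list, labels: list, n_bins: int = 10) -> float:
    """Expected calibration error over equal-width confidence bins:
    the |accuracy - mean confidence| gap per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, y))
    total, n = 0.0, len(probs)
    for b in bins:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        total += len(b) / n * abs(acc - conf)
    return total
```

A perfectly calibrated scorer drives both metrics toward zero, which is why they are meaningful lifecycle accuracy declarations.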

Article 50: Transparency obligations for certain AI systems

Article 50 imposes additional transparency obligations for AI systems that interact with natural persons or generate content. These are largely operator obligations rather than framework obligations, but VERDICT WEIGHT supports them by:
  • Producing structured, machine-readable confidence and reasoning data that downstream systems can surface to end users.
  • Providing per-stream interpretability sufficient to support human-readable explanations.

What the operator still owns

The framework does not address:
  • Article 11 technical documentation — this is the operator’s deployment-specific documentation, distinct from the framework’s documentation.
  • Conformity assessment — the formal procedure under Articles 43-44 is an operator/notified-body activity.
  • Quality management system (Article 17) — organizational, not technical.
  • Post-market monitoring plan (Article 72) — the framework provides telemetry; the plan is operator-defined.
  • Reporting of serious incidents (Article 73) — the audit chain provides the data; the reporting workflow is operator-defined.
  • Fundamental rights impact assessment (Article 27) — for deployers in scope.

Pre-deployment checklist

For a deployer preparing for EU AI Act conformity assessment of a system using VERDICT WEIGHT:
1. Identify applicability: confirm the deployment falls within the high-risk classification under Annex III or as a safety component.
2. Map technical documentation: use the per-article mappings above to identify which framework artifacts cover which Article 11 documentation requirements.
3. Refit calibration: refit Stream 5 on deployment-representative data and document the refit procedure for Article 15(1).
4. Configure the audit chain: use operator-controlled signing keys and durable storage for Article 12.
5. Document oversight: document the human oversight measures wired around the framework’s outputs for Article 14.
6. Engage a notified body: for Annex III high-risk systems requiring third-party conformity assessment, engage a notified body for review.
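Step 3's calibration refit can be illustrated with isotonic regression via the pool-adjacent-violators algorithm, one standard recalibration technique for mapping raw scores to probabilities on deployment-representative data. This is an assumption for illustration, not necessarily Stream 5's actual method.

```python
def isotonic_fit(scores: list, labels: list) -> list:
    """Fit a monotone score -> probability mapping by pool-adjacent-violators:
    sort by score, then merge adjacent blocks whose means violate monotonicity.
    Returns (upper_score, probability) steps."""
    pairs = sorted(zip(scores, labels))
    merged = []  # each block: [label_sum, count, max_score_in_block]
    for s, y in pairs:
        merged.append([float(y), 1, s])
        while (len(merged) >= 2 and
               merged[-2][0] / merged[-2][1] >= merged[-1][0] / merged[-1][1]):
            y2, n2, s2 = merged.pop()
            merged[-1][0] += y2
            merged[-1][1] += n2
            merged[-1][2] = s2
    return [(s, y / n) for y, n, s in merged]

def calibrate(score: float, fit: list) -> float:
    """Map a raw score through the fitted step function."""
    for upper, prob in fit:
        if score <= upper:
            return prob
    return fit[-1][1]
```

The refit dataset, the fitted steps, and the resulting reliability error are exactly the artifacts a deployer would document for Article 15(1).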

Reproducibility

Every claim in this mapping resolves to a specific page in the documentation, a specific module in the source code, or a specific record format in the audit chain. As with the NIST AI RMF mapping, the document is intended to be read with the codebase open.