What the EU AI Act is
The EU AI Act (Regulation (EU) 2024/1689) entered into force in 2024 with a phased implementation timeline through 2026. The Act takes a risk-based approach, classifying AI systems into four risk tiers:

| Tier | Treatment |
|---|---|
| Unacceptable risk | Prohibited (e.g. social scoring, real-time remote biometric identification in publicly accessible spaces with narrow exceptions). |
| High risk | Heavily regulated. Articles 9-15 establish obligations. |
| Limited risk | Transparency obligations under Article 50. |
| Minimal risk | Largely unregulated. |
Coverage summary
VERDICT WEIGHT addresses the technical controls underlying Articles 9 (risk management), 10 (data governance), 12 (record-keeping), 13 (transparency), 14 (human oversight), and 15 (accuracy, robustness, cybersecurity). It does not address Article 11 (technical documentation, which is operator-produced) or the conformity-assessment procedures, which are organizational.

Article 9: Risk management system
Article 9 requires a continuous, iterative risk management system covering the AI system’s lifecycle.

| Article 9 requirement | How VERDICT WEIGHT supports it |
|---|---|
| 9(2)(a) – identification and analysis of risks | The failure-class taxonomy F1-F8 (Completeness proof) is a structured risk identification framework specific to confidence-based AI decisioning. |
| 9(2)(b) – estimation and evaluation of risks | Reliability error, ablation studies, and adversarial detection rates provide quantitative risk evaluation. |
| 9(2)(c) – evaluation of other risks based on data | Out-of-distribution detection via Stream 2 and Stream 4 provides ongoing risk evaluation under post-market data. |
| 9(2)(d) – risk management measures | The composition rule’s veto / abstention / aggregation routing is the framework’s risk-response logic. |
| 9(7) – testing throughout development | The 673-test suite (Coverage overview) and the IEEE-grade hardening procedure constitute evidence of systematic testing. |
| 9(9) – specific consideration for vulnerable groups | Operator-supplied per use case; the framework provides per-stream interpretability that supports group-specific impact analysis. |
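The veto / abstention / aggregation routing referenced in 9(2)(d) can be sketched as follows. This is a minimal illustration, not the framework's actual composition rule: the stream names, the thresholds, and the mean aggregation are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StreamResult:
    name: str
    score: float      # calibrated confidence in [0, 1]
    veto: bool        # hard veto raised by this stream

def compose(results, act_threshold=0.85, abstain_band=(0.40, 0.85)):
    """Route per-stream results through veto / abstention / aggregation."""
    # Veto: any stream can force an abort regardless of the aggregate.
    vetoes = [r for r in results if r.veto]
    if vetoes:
        return {"outcome": "abort", "reason": f"veto:{vetoes[0].name}"}
    # Aggregation: a simple mean is shown here for illustration only.
    confidence = sum(r.score for r in results) / len(results)
    # Abstention: ambiguous aggregates are surfaced for human review.
    lo, hi = abstain_band
    if lo <= confidence < hi:
        return {"outcome": "abstain", "confidence": confidence,
                "reason": "confidence-in-abstention-band"}
    return {"outcome": "act" if confidence >= hi else "reject",
            "confidence": confidence}
```

The three branches correspond to the three risk responses in the table: abort (risk elimination), abstain (risk escalation to a human), and aggregate-then-act (accepted residual risk).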
Article 10: Data and data governance
Article 10 governs training, validation, and testing data.

| Article 10 requirement | How VERDICT WEIGHT supports it |
|---|---|
| 10(2)(a) – relevant design choices for data | NVD/KEV methodology documents the dataset construction rules used in validation. |
| 10(2)(b) – data collection processes documented | Validation datasets are constructed from public, citable sources with documented filtering. |
| 10(2)(g) – identification of relevant data gaps | Known limitations is an explicit enumeration of data gaps in the published validation. |
| 10(3) – relevant, representative, free of errors | Class balance, snapshot strategy, and exclusion rules are documented and reproducible. |
VERDICT WEIGHT is a scoring layer, not a model. The operator’s compliance with Article 10 depends primarily on the data used to train and validate the upstream model stack. The framework’s calibration refit on deployment-representative data is one input to that compliance, not a substitute for it.
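The calibration refit mentioned above can be illustrated with a simple histogram-binning calibrator fitted on deployment-representative (score, label) pairs. This is a generic sketch, not Stream 5's actual procedure; the binning scheme and the empty-bin fallback are assumptions.

```python
def fit_binned_calibrator(scores, labels, n_bins=10):
    """Fit a histogram-binning calibrator: each bin's calibrated output
    is the empirical positive rate observed in that bin."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)
    # Empirical positive rate per bin; fall back to the bin midpoint if empty.
    rates = [(sum(b) / len(b)) if b else (i + 0.5) / n_bins
             for i, b in enumerate(bins)]
    def calibrate(s):
        return rates[min(int(s * n_bins), n_bins - 1)]
    return calibrate
```

Refitting on deployment data means re-running the fit step on scores collected in the target environment, so that the calibrated outputs reflect that population rather than the published validation set.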
Article 12: Record-keeping
Article 12 requires automatic recording of events (“logs”) sufficient to support traceability.

| Article 12 requirement | How VERDICT WEIGHT supports it |
|---|---|
| 12(1) – automatic logging of events | The cryptographic audit chain (Stream 7) records every scoring event automatically. |
| 12(2)(a) – period of use | Timestamps on every record establish operational lifecycle data. |
| 12(2)(b) – reference database against which input checked | Registry hash and configuration version recorded with each event. |
| 12(2)(c) – input data resulting in match | Canonicalized evidence payload preserved per scoring call. |
| 12(2)(d) – identification of natural persons involved in verification | Operator identity is recorded for kill-switch and configuration events. |
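A hash-chained audit log of the kind Stream 7 describes can be sketched as follows. The record fields mirror the table rows above, but the exact schema, and the fact that the real chain is operator-signed rather than merely hashed, are not shown here; this is an illustration of the tamper-evidence mechanism only.

```python
import hashlib
import json
import time

def append_record(chain, payload, registry_hash, operator_id=None):
    """Append one scoring event to a hash-chained audit log. Each record
    commits to the previous record's hash, so later tampering breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": time.time(),        # Art. 12(2)(a): period of use
        "registry_hash": registry_hash,  # Art. 12(2)(b): reference configuration
        "payload": payload,              # Art. 12(2)(c): canonicalized input
        "operator_id": operator_id,      # Art. 12(2)(d): natural person, if any
        "prev_hash": prev_hash,
    }
    body = json.dumps(record, sort_keys=True)
    record["hash"] = hashlib.sha256(body.encode()).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every link; returns True iff no record was altered."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Editing any field of any past record changes its digest, which `verify_chain` detects on the next replay.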
Article 13: Transparency and provision of information
Article 13 requires high-risk systems to be transparent enough that deployers can interpret and use the output appropriately.

| Article 13 requirement | How VERDICT WEIGHT supports it |
|---|---|
| 13(1) – designed for transparency | Per-stream contributions are exposed in stream_breakdown; abstention and abort outcomes are accompanied by explicit reason strings. |
| 13(3)(a) – identity and contact details of provider | Documentation makes provider identity unambiguous. |
| 13(3)(b)(i) – characteristics, capabilities, limitations | This documentation site is the primary artifact. |
| 13(3)(b)(ii) – level of accuracy, robustness, cybersecurity | Reported in Validation & Research with reproducibility instructions. |
| 13(3)(b)(iii) – circumstances which may lead to risks | Threat model and known limitations enumerated explicitly. |
| 13(3)(b)(iv) – performance regarding specific persons or groups | Operator-supplied per deployment; framework supports per-group analysis through audit-chain replay. |
| 13(3)(b)(v) – specifications of input data | Evidence-payload schema documented in Scorer and Pipeline. |
| 13(3)(b)(vi) – information enabling output interpretation | confidence, should_act, outcome, and stream_breakdown are designed precisely for this. |
| 13(3)(d) – human oversight measures | Configurable thresholds, abstention rules, and escalation routing all support oversight. |
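The interpretation-oriented fields named in 13(1) and 13(3)(b)(vi) (confidence, should_act, outcome, stream_breakdown) might be consumed as follows. The example values, the reason contents, the stream names, and the explain helper are all hypothetical; only the four field names come from the documentation.

```python
# Illustrative output shape; field names from the docs, values hypothetical.
result = {
    "confidence": 0.91,
    "should_act": True,
    "outcome": "act",
    "reason": None,  # populated on abstain/abort outcomes
    "stream_breakdown": {
        "calibration": 0.88,
        "ood_detection": 0.95,
        "curveball": 0.93,
    },
}

def explain(result):
    """Render a deployer-facing, human-readable explanation (Art. 13(1))."""
    lines = [f"outcome={result['outcome']} confidence={result['confidence']:.2f}"]
    if result["reason"]:
        lines.append(f"reason: {result['reason']}")
    for stream, score in sorted(result["stream_breakdown"].items()):
        lines.append(f"  {stream}: {score:.2f}")
    return "\n".join(lines)
```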
Article 14: Human oversight
Article 14 requires high-risk systems to be designed for effective human oversight.

| Article 14 requirement | How VERDICT WEIGHT supports it |
|---|---|
| 14(2) – appropriate oversight measures | Calibrated confidence makes thresholding meaningful; abstention surfaces ambiguity for review. |
| 14(4)(a) – understand capabilities and limitations | Documentation, per-stream interpretability, and known-limitations enumeration support understanding. |
| 14(4)(b) – remain aware of automation bias | Calibrated confidence with documented reliability error counters automation bias by giving operators ground-truth-aligned uncertainty. |
| 14(4)(c) – correctly interpret the output | Per-stream breakdown plus reason strings support correct interpretation. |
| 14(4)(d) – decide not to use or to override | Override of should_act is the explicit responsibility of the calling system; the framework records overrides in the audit chain. |
| 14(4)(e) – intervene or interrupt | The kill switch (Stream 8) provides an authoritative interruption primitive. |
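The override and interruption paths in 14(4)(d) and 14(4)(e) can be sketched as a thin wrapper around the framework's output. The KillSwitch class and decide function are illustrative; only should_act and the existence of a Stream 8 kill switch come from the documentation.

```python
class KillSwitch:
    """Minimal interruption primitive, standing in for Stream 8."""
    def __init__(self):
        self.engaged = False
        self.engaged_by = None

    def engage(self, operator_id):
        self.engaged = True
        self.engaged_by = operator_id  # operator identity, per Art. 12(2)(d)

def decide(result, kill_switch, override=None):
    """Oversight wrapper: the caller may override should_act (Art. 14(4)(d));
    an engaged kill switch interrupts everything (Art. 14(4)(e))."""
    if kill_switch.engaged:
        return {"action": "halt", "by": kill_switch.engaged_by}
    if override is not None:
        return {"action": "act" if override else "hold", "overridden": True}
    return {"action": "act" if result["should_act"] else "hold",
            "overridden": False}
```

In a real deployment the override and halt events would also be appended to the audit chain, as the 14(4)(d) row notes.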
Article 15: Accuracy, robustness, cybersecurity
Article 15 requires high-risk systems to achieve appropriate levels of accuracy, robustness, and cybersecurity.

| Article 15 requirement | How VERDICT WEIGHT supports it |
|---|---|
| 15(1) – appropriate level of accuracy throughout lifecycle | Calibration (Stream 5) addresses this directly; refit procedures sustain it. |
| 15(2) – relevant accuracy metrics declared | REL, AUC, Brier, ECE all declared with confidence intervals. See Calibration curves. |
| 15(3) – robust against errors, faults, inconsistencies | Streams 2, 3, 4 each address a distinct robustness dimension. |
| 15(4) – resilient to attempts at altering use, output, performance | Curveball detection (Stream 6) addresses adversarial input; registry kill switch (Stream 8) addresses scoring-layer compromise. |
| 15(5) – resilient against unauthorised third parties exploiting vulnerabilities | Hash-chain integrity (Stream 7) detects tampering; registry hashing detects configuration manipulation. |
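Two of the metrics declared under 15(2), the Brier score and expected calibration error (ECE), have standard definitions that can be computed directly. The sketch below assumes binary labels and ten equal-width confidence bins; it does not reproduce the framework's own evaluation code.

```python
def brier_score(confidences, labels):
    """Mean squared error between predicted confidence and binary outcome."""
    return sum((c - y) ** 2 for c, y in zip(confidences, labels)) / len(labels)

def expected_calibration_error(confidences, labels, n_bins=10):
    """ECE: bin-weighted gap between mean confidence and accuracy per bin."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, labels):
        bins[min(int(c * n_bins), n_bins - 1)].append((c, y))
    n, ece = len(labels), 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

Tracking these metrics on live traffic, not just the published validation set, is what makes the "throughout lifecycle" clause of 15(1) demonstrable.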
Article 50: Transparency obligations for certain AI systems
Article 50 imposes additional transparency obligations for AI systems that interact with natural persons or generate content. These are largely operator obligations rather than framework obligations, but VERDICT WEIGHT supports them by:

- Producing structured, machine-readable confidence and reasoning data that downstream systems can surface to end users.
- Providing per-stream interpretability sufficient to support human-readable explanations.
What the operator still owns
The framework does not address:

- Article 11 technical documentation — this is the operator’s deployment-specific documentation, distinct from the framework’s documentation.
- Conformity assessment — the formal procedure under Articles 43-44 is an operator/notified-body activity.
- Quality management system (Article 17) — organizational, not technical.
- Post-market monitoring plan (Article 72) — the framework provides telemetry; the plan is operator-defined.
- Reporting of serious incidents (Article 73) — the audit chain provides the data; the reporting workflow is operator-defined.
- Fundamental rights impact assessment (Article 27) — for deployers in scope.
Pre-deployment checklist
For a deployer preparing for EU AI Act conformity assessment of a system using VERDICT WEIGHT:

1. Identify applicability: confirm the deployment falls within the high-risk classification under Annex III or as a safety component.
2. Map technical documentation: use the per-article mappings above to identify which framework artifacts cover which Article 11 documentation requirements.
3. Refit calibration: refit Stream 5 on deployment-representative data and document the refit procedure for Article 15(1).
4. Configure audit chain: configure the audit chain with operator-controlled signing keys and durable storage for Article 12.
5. Document oversight: document the human oversight measures wired around the framework’s outputs for Article 14.