
Documentation Index

Fetch the complete documentation index at: https://verdictweight.dev/llms.txt

Use this file to discover all available pages before exploring further.

Install

pip install verdict-weight
Python 3.10+ is required. No GPU required. No external services contacted at runtime.

Score a single decision

from verdict_weight import Scorer

scorer = Scorer()

# Submit evidence from your model stack.
result = scorer.score(
    prediction="approve",
    evidence={
        "model_logits": [0.82, 0.18],
        "retrieval_score": 0.91,
        "policy_check": True,
        "prior": 0.6,
    },
)

print(result.confidence)        # calibrated confidence in [0, 1]
print(result.should_act)        # boolean gate decision
print(result.stream_breakdown)  # per-stream contributions
print(result.audit_id)          # cryptographic chain identifier
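
The evidence dict above mixes heterogeneous signals: logits, a retrieval score, a boolean policy check, and a prior. As a rough illustration of how such streams could be fused into one scalar, here is a weighted-average sketch. This is not verdict-weight's actual algorithm; the normalization choices and weights below are invented for the example.

```python
# Illustrative only: hand-rolled fusion of heterogeneous evidence
# streams into one scalar in [0, 1]. The weights are invented; the
# library's real fusion logic is not shown here.

def fuse_evidence(evidence, weights):
    """Weighted average of evidence signals, each normalized to [0, 1]."""
    signals = {
        "model_logits": max(evidence["model_logits"]),  # top-class probability
        "retrieval_score": evidence["retrieval_score"],
        "policy_check": 1.0 if evidence["policy_check"] else 0.0,
        "prior": evidence["prior"],
    }
    total = sum(weights.values())
    return sum(weights[k] * signals[k] for k in signals) / total

evidence = {
    "model_logits": [0.82, 0.18],
    "retrieval_score": 0.91,
    "policy_check": True,
    "prior": 0.6,
}
weights = {"model_logits": 2.0, "retrieval_score": 1.0,
           "policy_check": 1.0, "prior": 0.5}

print(round(fuse_evidence(evidence, weights), 3))  # prints 0.856
```

A stream that abstains would simply be dropped from both sums, which is one way per-stream contributions (as in stream_breakdown) can remain interpretable.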

Interpreting the output

The result object exposes four primary surfaces:

confidence

A calibrated scalar in [0, 1]. Reliability is established empirically — see calibration curves.

should_act

A boolean produced by thresholding confidence against the configured policy. Defaults are conservative.

stream_breakdown

Per-stream contributions, including which streams abstained, agreed, or disagreed.

audit_id

The hash-chain identifier for this scoring event. Use it to retrieve the signed audit record.
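
Taken together, confidence and should_act form a simple gate: act only when the calibrated score clears a policy threshold. A minimal sketch of that thresholding idea follows; the 0.75 default here is invented for the example and is not the library's configured policy.

```python
# Illustrative gate: act only when calibrated confidence clears a
# policy threshold. The 0.75 default is invented for this sketch;
# verdict-weight's actual defaults come from its policy config.

def should_act(confidence: float, threshold: float = 0.75) -> bool:
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return confidence >= threshold

print(should_act(0.91))  # True: clears the threshold
print(should_act(0.60))  # False: a conservative default refuses
```

Raising the threshold trades recall for safety, which is why conservative defaults make sense for a gate that authorizes actions.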

Verify the audit chain

from verdict_weight import AuditChain

chain = AuditChain.load("verdict.log")

# Verify the entire chain is intact.
assert chain.verify(), "Chain integrity violation"

# Retrieve a specific decision by audit_id.
record = chain.get(result.audit_id)
print(record.signed_at, record.evidence_hash)

If chain.verify() returns False, the registry kill switch (Stream 8) will fire on the next scoring call. This is by design.
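
The integrity property behind verify() is the standard hash-chain construction: each record commits to the hash of its predecessor, so tampering with any entry invalidates every later link. A minimal sketch in plain Python (illustrative only; verdict-weight's record format and signing are not shown):

```python
import hashlib
import json

def link(prev_hash: str, record: dict) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(records):
    """Return the list of chained hashes, starting from a fixed genesis value."""
    hashes, prev = [], "genesis"
    for rec in records:
        prev = link(prev, rec)
        hashes.append(prev)
    return hashes

def verify(records, hashes) -> bool:
    """Recompute the chain and compare against the stored hashes."""
    return build_chain(records) == hashes

records = [{"decision": "approve", "n": i} for i in range(3)]
hashes = build_chain(records)
assert verify(records, hashes)

records[1]["decision"] = "deny"      # tamper with one entry...
assert not verify(records, hashes)   # ...and verification fails
```

Because every hash folds in the previous one, an attacker cannot alter a single record without recomputing the entire suffix of the chain.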

Next steps

Production patterns

Integrate VERDICT WEIGHT into a streaming inference pipeline.

Tuning

Adjust per-stream weights and thresholds for your deployment.

Validation results

Review the benchmark results before deciding whether to trust the framework.

Defense use cases

Why this framework is built for adversarial environments.

The code samples on this page reflect the canonical SDK shape. If your installed version differs, defer to the API reference and the package source on GitHub.