

When to use a pipeline

Use Pipeline when:
  • You are scoring decisions in a long-running service.
  • You need streaming or batched throughput rather than one-call-at-a-time scoring.
  • You want a single point of audit-chain management across many requests.
For ad-hoc or single-decision scoring, use Scorer directly.

Basic streaming pipeline

from verdict_weight import Pipeline

pipeline = Pipeline.from_config("config.yaml")

for request in incoming_stream:
    result = pipeline.score(request.prediction, request.evidence)
    handle(result)

Batched pipeline

results = pipeline.score_batch([
    {"prediction": req.prediction, "evidence": req.evidence}
    for req in requests
])

for req, res in zip(requests, results):
    handle(req, res)
Batched scoring is the preferred path when latency tolerance allows. It amortizes audit-chain I/O and stream-wide state operations across the batch.
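Grouping requests into fixed-size chunks is the caller's job. A minimal, library-agnostic sketch of a chunking helper (the `batched` function below is not part of verdict_weight; it is a generic pattern, and each chunk would then be passed to `pipeline.score_batch`):

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk
```

Choosing `size` trades latency for amortization: larger chunks spread audit-chain I/O over more requests but delay the first result.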

Concurrency model

The framework supports three concurrency modes. Choose based on your deployment:

Single-threaded

One scorer, one event loop. Simplest. Recommended for low-volume or audit-heavy deployments.

Multi-process

One scorer per process, separate audit logs reconciled offline. Recommended for high-throughput deployments.
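Offline reconciliation is not specified on this page; one common pattern, assuming each per-process log is already ordered by a timestamp field, is a k-way merge. The `"ts"` entry schema below is an assumption for illustration, not part of the verdict_weight audit format:

```python
import heapq

def reconcile(*process_logs):
    """Merge per-process audit logs, each sorted by 'ts', into one ordered log.

    Illustrative sketch only: the entry schema ('ts' key) is assumed,
    not defined by verdict_weight.
    """
    return list(heapq.merge(*process_logs, key=lambda entry: entry["ts"]))
```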

Async

Async scorer with cooperative event loop. I/O-bound stream evaluation overlaps cleanly.
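The async mode can be driven with standard asyncio machinery. In this sketch, `score_async` is a hypothetical stand-in for an awaitable scorer call (this page does not show the async API), and the semaphore bounds in-flight work so I/O overlaps without unbounded fan-out:

```python
import asyncio

async def score_async(request):
    # Hypothetical stand-in for an awaitable scorer call; real I/O would await here.
    await asyncio.sleep(0)
    return {"request": request, "outcome": "accept"}

async def score_stream(requests, concurrency=8):
    sem = asyncio.Semaphore(concurrency)

    async def guarded(req):
        async with sem:
            return await score_async(req)

    # gather preserves input order, keeping results aligned with requests.
    return await asyncio.gather(*(guarded(r) for r in requests))

results = asyncio.run(score_stream(list(range(4))))
```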

Production checklist

Before promoting a pipeline to production, verify:
1. Audit log path is durable: the configured log path lives on a durable, replicated, append-only-friendly filesystem.
2. Verification runs on startup: the pipeline calls audit_chain.verify() on startup and refuses to run if verification fails.
3. Kill-switch handling is wired: callers handle outcome == "abort" distinctly from the normal flow. Aborts must not be silently retried.
4. Calibration map is current: the calibration map has been refitted on validation data representative of the deployment domain. See Calibration.
5. Self-check is logged on startup: the framework’s self-check report is captured at startup and shipped to the operations log.
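The startup-verification item can be wired as a small guard. This sketch assumes `audit_chain.verify()` returns a boolean; the page documents only the call itself, so the return convention and the exception wiring are assumptions:

```python
class ChainIntegrityError(Exception):
    """Mirrors the exception name used elsewhere on this page."""

def require_verified_chain(audit_chain):
    # Refuse to start the pipeline if the audit chain fails verification.
    if not audit_chain.verify():
        raise ChainIntegrityError("audit chain failed startup verification")
    return True
```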

Error handling

The pipeline raises a small set of well-defined exception types. Catch them at the boundary appropriate to your application:
Exception            When raised
EvidenceError        The evidence payload was malformed or missing required keys.
ConfigError          The configuration is invalid or has changed since the startup hash was recorded.
ChainIntegrityError  The audit chain failed verification. The kill switch will be raised.
FrameworkAbort       The kill switch was raised by Stream 8. Do not catch and retry.
AbstainOutcome       Optional; raised only if the pipeline is configured to surface abstention as an exception.
Any other exception escaping the pipeline is a bug; please report it on GitHub.
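A boundary that respects the distinctions above might look like the following. The exception classes are defined locally as stand-ins so the sketch is self-contained; in real code they would be imported from verdict_weight:

```python
class EvidenceError(Exception):
    """Stand-in for verdict_weight's EvidenceError."""

class FrameworkAbort(Exception):
    """Stand-in for verdict_weight's FrameworkAbort."""

def handle_request(pipeline, request):
    try:
        return pipeline.score(request["prediction"], request["evidence"])
    except EvidenceError:
        # Malformed input: reject this one request, keep the service running.
        return {"outcome": "rejected", "reason": "malformed evidence"}
    except FrameworkAbort:
        # Kill switch: surface to the caller, never retry.
        raise
```

Note that `FrameworkAbort` is re-raised rather than swallowed, matching the rule that aborts must not be silently retried.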