Audit and Judgments
In Tessera, our Python implementation of JAM's auditing and judgment system (gray paper Section 17), off-chain validation of work-reports is handled by a modular AuditEngine class. The engine manages tranches (8-second intervals), announcements (CE 144), judgments (CE 145), no-shows, negative judgments, and disputes, ensuring each block satisfies the audited condition U (more than 682 positive judgments per report, or no negatives with all assignments met) before finality. Below is a step-by-step explanation with code snippets from our codebase showing how we process these elements.
Key Features
- Tranches: 8-second cycles for assignments and updates.
- Auditing Load: 10 audits per validator, 30 per report (1,023 validators, 341 reports).
- Finality Tie-In: U_B is a prerequisite for GRANDPA finality (>682 votes; gray paper Section 10).
Quick Start
```python
from tessera.audit_engine import AuditEngine

engine = AuditEngine(block_id=500)
status = engine.run(tranche_data)  # Extracts judgments, updates U_B
```
Auditing Lifecycle and Tranches
This page explains how our implementation triggers auditing and issues judgments from the assurance transition, how tranche timing works, and how we filter work-reports for auditing. It includes concrete Python snippets reflecting the production code paths (Gray Paper Section 17).
Lifecycle overview
- Block announcement happens every 6 seconds. Immediately after, the state transition runs (Assurances, PVM, etc.).
- Assurance transition may return newly available work-reports. If any are available, we trigger the asynchronous auditing pipeline.
- The AuditEngine groups time into tranches (8-second periods) and schedules/records judgments, no-shows, and negative judgments accordingly.
### Assurance transition produces available reports
```python
_, newly_avail_wrs = Assurances.transition(pre_state, self, block)

if len(newly_avail_wrs) > 0:
    logger.info(
        "Newly available WRs",
        count=len(newly_avail_wrs),
        wrs=[wr.hash().hex()[:16] + "..." for wr in newly_avail_wrs],
    )
```
At the end of state transition, we trigger the audit pipeline if there are any new work-reports:
```python
import asyncio

from jam.audit.audit_engine import AuditEngine

audit_engine = AuditEngine()

if len(newly_avail_wrs) > 0:
    # Fire-and-forget; runs the tranche-aware auditing loop
    asyncio.create_task(
        audit_engine.run(block=block, newly_avail_wrs=newly_avail_wrs)
    )
```
Timing and tranches
- Block announcements are aligned to slots; each slot is 6 seconds.
- Auditing operates in 8-second tranches.
Given a block header with a slot number, we compute the block announcement timestamp and tranche boundaries:
```python
# Given constants
SLOT_PERIOD = 6   # seconds
AUDIT_PERIOD = 8  # seconds

# Time when this block was announced
curr_ts = SLOT_PERIOD * int(block.header.slot)

# Start of the next audit tranche window
next_ts = curr_ts + AUDIT_PERIOD

# Current tranche index for this header (implementation-owned helper on an Audit instance)
tranche_index = audit.tranche_index(header=block.header)
```
Intuition: if block1 is announced at 12:00:00, block2 arrives at 12:00:06. The tranche windows tick every 8 seconds (… :00, :08, :16, …). Newly available work-reports from the assurance transition at 12:00:00 are slotted for auditing with respect to the current tranche index.
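For concreteness, the sketch below shows how a tranche index could be derived from the slot and the wall clock under the constants above. `tranche_index_sketch` is a hypothetical stand-in for the implementation-owned `Audit.tranche_index` helper, not the production code.

```python
import time

SLOT_PERIOD = 6   # seconds per slot
AUDIT_PERIOD = 8  # seconds per tranche

def tranche_index_sketch(slot: int, now: float | None = None) -> int:
    """Elapsed wall-clock time since the block announcement, bucketed into 8-second tranches."""
    now = time.time() if now is None else now
    curr_ts = SLOT_PERIOD * slot      # block announcement timestamp
    elapsed = max(0.0, now - curr_ts)
    return int(elapsed // AUDIT_PERIOD)

# A block announced at t = 3000 s, queried 13 s later, sits in tranche 1
assert tranche_index_sketch(slot=500, now=3013.0) == 1
```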
Selecting auditable reports
From the prior state rho (the sequence of optional work-report states per core) and the fresh set of newly available reports, we construct the auditable sequence q as per equations 17.3/17.4. Only the reports that both exist in rho and just became available are forwarded for auditing; empty slots are represented with Null, preserving alignment with cores.
```python
@staticmethod
def auditable_reports(
    prior_state: TypedVector[OptionalWorkReportState],
    newly_rep: TypedVector[WorkReport]
) -> OptionalReports:
    """
    Equations 17.3, 17.4.
    Define the sequence of work-reports which we may be required to audit as q,
    a per-core sequence of optional reports whose length equals the number of cores.

    Args:
        prior_state: rho, the block's prior per-core state
        newly_rep: pending work-reports which have just become available
    """
    auditable_reports = OptionalReports([])
    for r in prior_state:
        report_state: (WorkReportState | Null) = r.unwrap()
        if isinstance(report_state, WorkReportState) and report_state.report in newly_rep:
            auditable_reports.append(OptionalReport(report_state.report))
        else:
            auditable_reports.append(OptionalReport(Null))
    return auditable_reports
```
This preserves one-to-one indexing with cores while ensuring we only audit the newly available subset.
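As an illustration only, the dependency-free sketch below reproduces the same alignment logic with plain Python lists, using `None` in place of `Null` and strings in place of work-reports; none of these names exist in the codebase.

```python
def auditable_reports_sketch(prior_state, newly_available):
    """Toy version of Audit.auditable_reports: keep per-core alignment, forward only fresh reports."""
    q = []
    for report in prior_state:                          # one entry per core, None = empty slot
        if report is not None and report in newly_available:
            q.append(report)                            # freshly available: audit it
        else:
            q.append(None)                              # preserve core alignment
    return q

rho = ["wr_a", None, "wr_b", "wr_c"]                    # per-core prior state
fresh = {"wr_a", "wr_c"}                                # reports that just became available
assert auditable_reports_sketch(rho, fresh) == ["wr_a", None, None, "wr_c"]
```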
Work-Report Selection for Tranches in Auditing
This section details Tessera's implementation of work-report selection for auditing (gray paper Section 17.3, Equations 17.5, 17.6, 17.14, 17.15), where validators are assigned ~10 reports per tranche. For tranche 0, we use VRF to shuffle and select initial reports. For tranche > 0, we adapt based on no-shows from the previous tranche.
Implementation Details
- Input: entropy source (header VRF), Bandersnatch key, unaudited reports (OptionalReports of length 341), tranche=0.
- Output: TypedVector[CoreReport] with up to 10 assigned reports.
- Steps: map to CoreOptionalReport tuples, shuffle indices, look up and filter non-null entries.
### Tranche 0 (Equations 17.5/17.6)
* Construct `core_report` mapping core indices to optional reports.
* Derive entropy via Bandersnatch VRF signature.
* Shuffle indices deterministically; take first up to `AUDIT_REPORT_ASSIGNED` non-null entries.
* Emit announcement with first-tranche evidence (Bandersnatch signature only).
```python
@classmethod
def verifiable_random_selection(
    cls,
    entropy_source: BandersnatchVrfSignature,
    bandersnatch_key: Bytes[32],
    unaudited_report: OptionalReports,
    tranche: Tranche,
) -> TypedVector[CoreReport]:
    entropy = vrf_output(
        cls.vrf_signature_bandersnatch(
            entropy_source=entropy_source,
            bandersnatch_key=bandersnatch_key,
            tranche=tranche,
            w_r=None,
        )
    )

    # Map q's reports to tuple[CoreIndex, Option[WorkReport]]
    core_report = TypedVector[CoreOptionalReport]([])
    for c, wr in enumerate(unaudited_report):
        value = CoreOptionalReport(core_index=CoreIndex(c), work_report=wr)
        core_report.append(value)

    # Index array of the same size as core_report, to be shuffled
    index_array = TypedVector[Uint[32]]([])
    for i in range(len(unaudited_report)):
        index_array.append(Uint[32](i))

    # Deterministic shuffle of the index array driven by the VRF-derived entropy
    shuffle_array = shuffle(entropy, index_array)

    # Re-order the core/report tuples according to the shuffled indices
    lookup = {cr.core_index: cr for cr in core_report}
    updated_array = [lookup[CoreIndex(i)] for i in shuffle_array]

    # Take the first AUDIT_REPORT_ASSIGNED (10) non-null reports
    # Eq. 17.5: a_0 = {(c, w) | (c, w) taken from the first 10 of the shuffled q, w != None}
    shuffle_not_null = TypedVector[CoreReport](
        [
            CoreReport(
                core_index=c_r.core_index,
                work_report=c_r.work_report.unwrap()
            )
            for c_r in updated_array
            if c_r.work_report.unwrap() is not Null
        ][:AUDIT_REPORT_ASSIGNED]
    )
    return shuffle_not_null
```
For tranches beyond 0, selection adapts based on no-shows from the previous tranche (Equations 17.14/17.15). For each unaudited report, a new VRF output decides assignment: the validator takes on the report if the output falls below a threshold scaled by m_n (the no-show count), ensuring coverage without overload.
### Tranche n > 0 (Equations 17.14/17.15)
* For each unaudited report, compute VRF output parameterized by `(header_entropy, tranche_index, report_hash)`.
* Let `m_n = no_shows + negatives` (derived from previous tranche records).
* Accept assignment if `(VAL_COUNT / (256 * AUDIT_BIAS_FACTOR)) * vrf_output_byte0 < m_n`.
* This probabilistic throttle minimizes redundant audits while guaranteeing coverage escalation under inactivity or dispute.
```python
@classmethod
async def vrf_tranche(
    cls,
    header_hash: HeaderHash,
    tranche: Tranche,
    entropy: BandersnatchVrfSignature,
    unaudited_wrs: OptionalReports,
) -> TypedVector[CoreReport]:
    from jam.settings import settings
    from jam.storage.tranche_audit_store import tranche_store

    tranche_index = tranche.tranche_index
    assigned_wrs = TypedVector[CoreReport]([])

    for wr in unaudited_wrs:
        rep = wr.unwrap()
        if rep != Null:
            # Per-report VRF output parameterized by (entropy, tranche, report)
            random_quantity = cls.vrf_signature_bandersnatch(
                bandersnatch_key=settings.bandersnatch_private,
                entropy_source=entropy,
                tranche=tranche,
                w_r=rep,
            )
            # Left-hand side of Eq. 17.14/17.15: scale the first VRF output byte
            vrf_check = (VALIDATOR_COUNT / (256 * AUDIT_BIAS_FACTOR)) * vrf_output(
                random_quantity
            )[0]

            # Look up the previous tranche's records for this work-report
            prev_tranche_index = tranche_index - TrancheIndex(1)
            prev_tranche = Tranche(tranche_index=prev_tranche_index, header_hash=header_hash)
            wr_hash = rep.hash()
            state = await tranche_store.get_state(tranche=prev_tranche)
            records = state.records.get(wr_hash)
            if records is None:
                # No announcements recorded for this report in the previous tranche
                continue

            # m_n: announcements without a positive judgment (no-shows plus negatives)
            m_n = len(records.announces) - len(records.true_votes)
            if vrf_check < m_n:
                assigned_report = CoreReport(core_index=CoreIndex(rep.core_index), work_report=rep)
                assigned_wrs.append(assigned_report)

    return assigned_wrs
```
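The condition above implies a simple expected workload: with a uniform VRF byte, a validator picks up a given report with probability roughly `m_n * AUDIT_BIAS_FACTOR / VALIDATOR_COUNT`, so each unresolved announcement recruits about `AUDIT_BIAS_FACTOR` extra auditors in expectation. The snippet below is a back-of-envelope check; the `AUDIT_BIAS_FACTOR = 2` value is assumed purely for illustration.

```python
VALIDATOR_COUNT = 1023
AUDIT_BIAS_FACTOR = 2  # assumed value, for illustration only

def assignment_probability(m_n: int) -> float:
    """Probability that (VALIDATOR_COUNT / (256 * AUDIT_BIAS_FACTOR)) * byte < m_n for a uniform byte."""
    threshold = m_n * 256 * AUDIT_BIAS_FACTOR / VALIDATOR_COUNT  # byte must fall below this value
    return min(1.0, threshold / 256)

for m_n in (0, 1, 5, 20):
    print(m_n, round(assignment_probability(m_n), 4))
# 0 -> 0.0, 1 -> 0.002, 5 -> 0.0098, 20 -> 0.0391
```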
Announcements, Judgments (True/False) and No-shows
- Judgment (positive/negative): produced by the AuditEngine after verifying the report within its tranche window. These contribute to the report’s audit tally (toward the U_B condition).
- No-show: if an expected audit assignment isn’t fulfilled within its tranche window, it’s recorded as a no-show for that slot and handled according to scheduling rules in the next tranche(s).
- Negative/false judgment: if verification fails or inconsistency is detected, the engine records a negative judgment and may flag the validator/report for dispute resolution.
The asynchronous audit loop keeps these counters per report and per tranche, emitting events that higher layers can use for dispute and finality logic.
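A minimal sketch of those per-report counters is shown below; the field names mirror the tranche records described later (`announces`, `true_votes`, `false_votes`), but the dataclass itself is illustrative rather than the production structure.

```python
from dataclasses import dataclass, field

@dataclass
class ReportTallySketch:
    """Illustrative per-report counters kept for one tranche."""
    announces: set[str] = field(default_factory=set)    # validators that announced an assignment
    true_votes: set[str] = field(default_factory=set)   # positive judgments received
    false_votes: set[str] = field(default_factory=set)  # negative judgments received

    @property
    def m_n(self) -> int:
        # Announcements without a positive judgment: no-shows plus negatives,
        # the quantity used to escalate selection in the next tranche.
        return len(self.announces) - len(self.true_votes)

tally = ReportTallySketch(announces={"v1", "v2", "v3"}, true_votes={"v1"}, false_votes={"v2"})
assert tally.m_n == 2   # one no-show (v3) plus one negative judgment (v2)
```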
Example timeline
- 12:00:00 — Block1 announced (slot boundary at 6s cadence). Assurance transition emits N newly available work-reports.
- 12:00:00 — We trigger `AuditEngine.run(block=block1, newly_avail_wrs=N)` asynchronously.
- 12:00:00 — Engine computes `curr_ts`, `tranche_index`, and filters to `auditable_reports` using rho.
- 12:00:00–12:00:08 — Within the current tranche window, assignments execute; announcements and judgments are recorded; no-shows and negatives are tracked.
- 12:00:06 — Block2 announced; its assurance transition may produce more newly available reports, which trigger another `run` call aligned to its tranche index.
Architecture Overview
The auditing subsystem consists of four cooperating layers:
- Trigger Layer (Assurance transition) – Detects newly available work-reports and invokes the audit loop.
- Time & Tranche Layer (`AuditEngine.run`) – Slices wall-clock time into 8-second tranches, derives `TrancheIndex`, and advances state.
- Selection & Announcement Layer (`Audit.verifiable_random_selection`, `Audit.vrf_tranche`, `Auditor.assignment_wrs`, CE 144 protocol) – Decides which validator audits which reports and publishes verifiable announcements.
- Judgment Layer (`Auditor.judgment_process`, CE 145 protocol) – Produces and distributes signed validity attestations, updating tranche state and feeding the global audited condition U.
Persistent state across tranches lives in `tranche_store` and includes the following (a conceptual sketch follows this list):
- `unaudited_list` – A per-core optional sequence of newly available work-reports.
- Records per work-report hash: `announces`, `true_votes` (valid judgments), `false_votes` (invalid judgments).
- Carry-forward mechanics for tranche > 0 via `TrancheState.carry_forward()`.
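A conceptual sketch of that state, assuming plain Python containers in place of the real typed structures, might look like this; the actual `TrancheState` and `carry_forward` implementation may differ.

```python
from dataclasses import dataclass, field

@dataclass
class TrancheStateSketch:
    # Per-core optional sequence of newly available work-reports (None = empty core)
    unaudited_list: list[str | None] = field(default_factory=list)
    # work_report_hash -> {"announces": [...], "true_votes": [...], "false_votes": [...]}
    records: dict[str, dict[str, list[str]]] = field(default_factory=dict)

    def carry_forward(self) -> "TrancheStateSketch":
        """Seed the next tranche with unresolved items while keeping accumulated vote history."""
        nxt = TrancheStateSketch(unaudited_list=list(self.unaudited_list))
        for wr_hash, rec in self.records.items():
            nxt.records[wr_hash] = {key: list(votes) for key, votes in rec.items()}
        return nxt
```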
Core Components
| Component | Responsibility | Key Methods |
|---|---|---|
| `AuditEngine` | Orchestrates tranche loop & termination criteria | `run()` |
| `Audit` | Implements spec equations & VRF selection logic | `auditable_reports`, `verifiable_random_selection`, `vrf_tranche`, `tranche_index`, `refine`, `judgment_signature`, `validator_announcement_statement` |
| `Auditor` | Performs report assignment, announcements & judgments | `assignment_wrs`, `announcement`, `judgment_process`, `negative_judgments` |
| CE 144 Protocol | Distributes audit announcements | `AuditAnnouncement.transmit`, `req_intercept` |
| CE 145 Protocol | Distributes judgments | `JudgmentPublication.transmit`, `req_intercept`, `handle_judgment` |
| `tranche_store` | Persistence of per-tranche audit progress | `save_state`, `get_state`, `records_announcement`, `update_judgment`, `fetch_rep_tranche`, `remove_block_history` |
| `Utils` | Helper logic: subsequent tranche evidence, dispute extrinsic building, audited check | `is_tranche`, `dispute_ext`, `block_audited`, `fetch_report` |
Data Flow (High-Level Sequence)
```text
Block Announced -> Assurance Transition -> Newly Available WRs? -> Start AuditEngine.run
  -> Build auditable_reports (q) from prior rho & new set
  -> Determine tranche_index
     If tranche 0:
         Initialize TrancheState (unaudited_list = q)
         Selection via verifiable_random_selection (VRF shuffle)
         CE144 announcement broadcast
         Refine & CE145 judgments
     Else tranche > 0:
         Load prev_state, carry forward
         Compute subsequent tranche evidence (no-shows / negative triggers)
         If no evidence AND block audited => mark block audited & cleanup
         Else perform VRF-based conditional selection (vrf_tranche)
         CE144 announcement + CE145 judgments
  -> Sleep until end of tranche window & iterate until audited
```
Tranche Mechanics
Tranches advance strictly every AUDIT_PERIOD seconds. For a given block header:
- `curr_ts = SLOT_PERIOD * slot` anchors the block's temporal origin.
- `next_ts = curr_ts + AUDIT_PERIOD` bounds the tranche window.
- `tranche_index` is spec-driven (Equation 17.13 equivalent): wall-clock time minus the slot baseline, divided by the tranche length.
Termination conditions within `run()` (a simplified loop sketch follows this list):
- Block already finalized earlier (slot regression) → abort.
- No subsequent evidence & block audited after judgments → mark audited, remove history.
- Continue loop if audited condition unmet.
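The sketch below condenses the loop into a few lines under those termination rules; the callables passed in stand for the selection/announcement/judgment work and the audited check, and are not the production signatures.

```python
import asyncio
import time
from typing import Awaitable, Callable

async def run_sketch(
    slot: int,
    audit_one_tranche: Callable[[int], Awaitable[None]],
    block_audited: Callable[[], Awaitable[bool]],
    slot_period: int = 6,
    audit_period: int = 8,
) -> None:
    """Simplified tranche loop: work one 8-second window at a time until the block is audited."""
    curr_ts = slot_period * slot                     # block announcement timestamp
    tranche_index = 0
    while True:
        await audit_one_tranche(tranche_index)       # select, announce (CE 144), judge (CE 145)
        if await block_audited():                    # U holds for every newly available report
            return                                   # caller prunes tranche history here
        next_ts = curr_ts + (tranche_index + 1) * audit_period
        await asyncio.sleep(max(0.0, next_ts - time.time()))
        tranche_index += 1                           # escalate into the next tranche
```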
Announcement Distribution (CE 144)
Payload structure (`CE144Data`):
- `tranche_announcement` – header_hash, tranche_index, announcement list (core_index + work_report_hash tuples) plus an Ed25519 signature committing to the selection.
- `evidence` – Either FirstTrancheEvidence (Bandersnatch VRF signature) or SubsequentTrancheEvidence (Bandersnatch signature plus a list of NoShow records).
- Length fields (`len_a`, `len_b`) guard framing integrity; `is_valid` asserts internal consistency (a conceptual sketch follows this list).
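As a rough sketch of how the framing guard can be expressed (field types and encodings here are assumptions, not the production definitions):

```python
from dataclasses import dataclass

@dataclass
class CE144DataSketch:
    len_a: int                    # declared byte length of the announcement blob
    len_b: int                    # declared byte length of the evidence blob
    tranche_announcement: bytes   # encoded announcement (header hash, tranche index, selections, signature)
    evidence: bytes               # encoded first- or subsequent-tranche evidence

    def is_valid(self) -> bool:
        # Framing guard: declared lengths must match the encoded payloads
        return len(self.tranche_announcement) == self.len_a and len(self.evidence) == self.len_b
```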
Reception (req_intercept):
- Decode and persist announcement into tranche store.
- Validate lengths & signature context (external signature verification is implied by higher layers).
Judgment Publication (CE 145)
Judgment structure:
- `epoch_index` – Derived from `state.tau / EPOCH_LENGTH` (current or prior epoch permitted).
- `validator_index`, `work_report_hash`, `validity` (U8: 1 for valid, 0 for invalid), `ed25519_signature`.
- Framed by `CE145Data` with a single length preface.
Judgment flow:
- Local refinement via `audit.refine(wr)` sets preliminary validity.
- A signature is produced (`Audit.judgment_signature`) binding hash + validity.
- Broadcast to peers; peers intercept, epoch-match (sketched below), and enqueue verification / persistence.
- Negative judgments trigger the `negative_judgments` path: independent re-refinement and re-publication to mitigate false negatives and enforce redundancy.
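The epoch-match rule reduces to a small check like the one below; the `EPOCH_LENGTH` value and helper name are placeholders for illustration.

```python
EPOCH_LENGTH = 600  # slots per epoch; placeholder value for illustration

def judgment_epoch_acceptable(judgment_epoch: int, current_tau: int) -> bool:
    """Accept judgments stamped with the current epoch or the one immediately prior."""
    current_epoch = current_tau // EPOCH_LENGTH
    return judgment_epoch in (current_epoch, current_epoch - 1)

assert judgment_epoch_acceptable(9, current_tau=10 * 600)       # prior epoch: accepted
assert not judgment_epoch_acceptable(7, current_tau=10 * 600)   # drift > 1: rejected
```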
Tranche State & Persistence
TrancheState fields (conceptual):
- `unaudited_list`: OptionalReports aligned to cores.
- `records`: map of work_report_hash → `{announces, true_votes, false_votes}`.
- Each field is stored explicitly; the braces are documentation shorthand, and all identifiers shown here are literal names in the tranche record structure.
- Carry-forward replicates unresolved items & accumulates vote history.
Key invariants:
- Announcements precede judgments for each tranche.
- No-show calculation relies on the previous tranche: `count(announces) - count(true_votes)`.
- Negative judgments immediately enrich `false_votes` and can fuel subsequent tranche escalation.
Audited Condition (U) Evaluation
The audited predicate for a single report aligns with the Gray Paper definition:
U(report) is true if:
(no negative judgments AND all required tranche assignments satisfied)
OR
(#positive_judgments > 2/3 * VAL_COUNT)
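Transcribed directly to Python, the predicate looks like the sketch below; the tallies come from the tranche records, and the assignment-satisfaction check is abstracted into a boolean.

```python
VALIDATOR_COUNT = 1023

def report_audited(positive: int, negative: int, assignments_satisfied: bool) -> bool:
    """U(report): no negatives with all assignments met, or a >2/3 positive supermajority."""
    supermajority = positive > (2 * VALIDATOR_COUNT) // 3   # i.e. more than 682 of 1023
    return (negative == 0 and assignments_satisfied) or supermajority

assert report_audited(positive=5, negative=0, assignments_satisfied=True)
assert report_audited(positive=700, negative=3, assignments_satisfied=False)
assert not report_audited(positive=400, negative=1, assignments_satisfied=True)
```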
Implementation path:
- `Utils.block_audited` iterates newly available reports, embedding the above logic with data from `tranche_store` for the current tranche.
- Once all reports satisfy U, the block is flagged audited (`BlockView.mark_as_audited`).
Negative Judgment Handling & Escalation
On receiving a negative judgment:
- Locate the tranche via `fetch_rep_tranche`.
- Re-refine locally (`audit.refine`), ensuring deterministic recomputation.
- If confirmed invalid, propagate the updated judgment (CE 145) to widen awareness.
- Escalation influences next-tranche selection probability through an increased `m_n`.
```python
async def negative_judgments(self, judgment: Judgment, tranche: Tranche):
    """Handle only negative judgments: re-refine the report and transmit an updated judgment."""
    import math

    from jam.settings import settings
    from jam.storage.tranche_audit_store import tranche_store, TrancheIndex
    from jam.audit.audit import Audit
    from jam.audit.utils import Utils

    audit = Audit()
    utils = Utils()
    validator_index = settings.validator_index
    wr_hash = judgment.work_report_hash

    # Re-refine the disputed report locally and derive our own validity verdict
    wr = await utils.fetch_report(wr_hash=wr_hash)
    update_validity = await audit.refine(wr=wr)

    epoch_index = EpochIndex(math.floor(state.tau / EPOCH_LENGTH))
    ed25519_signature = Audit.judgment_signature(wr_hash=judgment.work_report_hash, validity=update_validity)
    judgment = Judgment(
        epoch_index=epoch_index,
        validator_index=validator_index,
        validity=update_validity,
        work_report_hash=judgment.work_report_hash,
        ed25519_signature=Ed25519Signature(ed25519_signature),
    )

    # Save the (re-)judgment in the tranche state
    await tranche_store.update_judgment(
        tranche=tranche,
        judgment=judgment,
        ed25519_public=settings.ed25519_public
    )

    # Broadcast the updated judgment to peers (CE 145)
    data = CE145Data(len_a=U32(len(judgment.encode())), judgment=judgment)
    response = await self.transmit(data=data)
```
Dispute & Cleanup Hooks
`Utils.dispute_ext` may craft dispute extrinsics once threshold conditions arise (one-third negatives or an incoherent audit progression; a threshold sketch follows below). After a successful audit, or after tranche termination without further escalation, `tranche_store.remove_block_history` prunes transient state.
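The one-third-negatives trigger reduces to a check like the following; the exact condition set used by `Utils.dispute_ext` may differ, so treat this as an illustration.

```python
VALIDATOR_COUNT = 1023

def dispute_threshold_reached(false_votes: int) -> bool:
    """Treat one-third or more negative judgments on a report as grounds for a dispute extrinsic."""
    return 3 * false_votes >= VALIDATOR_COUNT

assert not dispute_threshold_reached(100)
assert dispute_threshold_reached(341)   # 341 * 3 = 1023 >= 1023
```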
Error Handling & Edge Cases
| Scenario | Detection | Response |
|---|---|---|
| Timeout in tranche loop | `asyncio.TimeoutError` around `assignment_wrs` | Log warning, proceed to sleep remainder |
| Empty selection (`assigned_wrs == []`) | Length check | Skip announcement/judgment cycle |
| Stale block (slot < finalized.slot) | Slot comparison | Abort audit run early |
| Invalid framing (length mismatch) | `is_valid == False` | Raise networking error, close stream |
| Judgment epoch drift > 1 | Epoch comparison | Reject judgment & log error |
| Missing tranche for judgment | `fetch_rep_tranche` returns None | Log error; ignore judgment |
| Malicious mass negatives | High `false_votes` proportion | Escalate via increased `m_n`, potential dispute extrinsic |
Performance Considerations
- Network fan-out: CE 144 & CE 145 broadcast to all connected peers; potential optimization: partial gossip with redundancy threshold.
- VRF computations: O(R) per tranche where R is remaining unaudited reports; truncated by early satisfaction of audited condition.
- Storage growth: bounded by per-block, per-tranche lifetimes; aggressive pruning after audited conclusion avoids unbounded history.
- Concurrency: `asyncio.create_task` is used for non-blocking transmissions & state updates; judgments are processed sequentially per assignment list for deterministic ordering.
Mapping Spec Equations to Code
| Gray Paper Reference | Code Location | Description |
|---|---|---|
| Eq. 17.3 / 17.4 | Audit.auditable_reports | Build q from rho & newly available set |
| Eq. 17.5 / 17.6 | Audit.verifiable_random_selection | Initial tranche assignment & shuffle |
| Eq. 17.14 / 17.15 | Audit.vrf_tranche | Conditional assignment logic for later tranches |
| Tranche index derivation | Audit.tranche_index | Time to tranche mapping |
| Audited predicate U | Utils.block_audited | Determine block completion |
| Negative escalation | Auditor.negative_judgments | Re-refinement & propagation |
Security Properties Achieved
- Liveness: VRF-based escalation ensures eventual audit coverage despite inactivity.
- Integrity: Multi-signature (Ed25519 + VRF evidence) announcements prevent unilateral reassignment manipulation.
- Accountability: Persistent records of announcements & judgments enable dispute construction.
- Resilience: >2/3 positive override path prevents censorship via isolated false negatives.
Future Improvements (Potential)
- Adaptive tranche duration under high contention.
- Partial aggregation of judgments to reduce bandwidth (multisig/BLS).
- Incremental Merkle commitments for faster dispute proofs.
- Rate limiting for redundant negative re-broadcasts.
Summary
We couple assurance output directly into the audit pipeline, using slot-derived timestamps and fixed tranche periods to regulate scheduling. The auditable_reports function enforces core alignment and selects only freshly available work-reports, while the asynchronous audit engine tracks judgments, no-shows, and negatives contributing to finality prerequisites.
This document details the full lifecycle and implementation fidelity of Tessera's auditing and judgment process, mapping JAM Gray Paper Section 17 semantics directly to production Python code for clarity and maintainability.