How we track and classify predictions, and how updates are applied. This is a manual forecast-tracking system; no status is changed automatically.
Each prediction has exactly one of these statuses at any time.
Happened: the predicted event or outcome has occurred or clearly begun. We may record a "happened date" when applicable.
Partially happened: a mixed outcome; some aspects of the prediction have materialized, others have not. Used when a simple happened/contradicted split does not fit.
Not yet decided: the prediction is still open, and no conclusive event has occurred that would mark it as happened or contradicted. We track a probability (0–100%) for unresolved predictions.
Contradicted: a clear opposite or contradicting event has occurred. We do not mark a prediction contradicted merely because it has not happened yet; a positive contradiction is required. We record a "contradicted date" when set.
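For illustration, the statuses and their per-status dates could be modeled roughly as below. This is a hedged TypeScript sketch; the field names are assumptions, and the real schema (prediction_current) may differ.

```typescript
// Illustrative only: status values and fields are inferred from the text
// above, not taken from the actual schema.
type PredictionStatus =
  | "happened"            // the event occurred or clearly began
  | "partially_happened"  // mixed outcome
  | "not_yet_decided"     // still open; carries a probability
  | "contradicted";       // a clear opposite event occurred

interface Prediction {
  id: string;
  text: string;
  status: PredictionStatus;
  probability?: number;      // 0-100, only meaningful while not_yet_decided
  happenedDate?: string;     // ISO date, recorded when applicable
  contradictedDate?: string; // ISO date, recorded when set
}
```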
Visual guide (the status legend is the same across the dashboard, table, and detail views)
When we allow "contradicted" and when we do not.
A prediction is marked contradicted only when a clear opposite event has happened, not simply because time has passed or the event has not yet occurred. "Not yet decided" is the default for open predictions.
We never automatically convert unresolved predictions to contradicted. Every status change is applied manually by an operator, with full audit history. This keeps the data interpretable and avoids falsely counting "no news" as a miss.
What the 0–100% value represents for unresolved predictions.
For predictions with status not yet decided, we store a probability between 0 and 100. This represents our current assessment of how likely the prediction is to eventually be resolved as "happened" (or partially happened). It can be updated over time as new information arrives.
We display probability as a percentage (e.g. 65%). On the dashboard and accuracy view we also show percentages for the resolution breakdown (e.g. "Happened 23.5%", "Contradicted 11.8%") and for the hit rate (e.g. "72.1%"). Resolved predictions (happened, partially happened, contradicted) do not use the probability field for scoring; the Jiang hit rate and other KPIs are computed from resolved outcomes only, with unresolved items excluded.
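As a small illustration of the display rules, the sketch below clamps a stored probability to 0–100 and formats values the way the UI examples above read; the rounding and one-decimal formatting are assumptions based on those examples.

```typescript
// Hedged sketch: helper names are illustrative, not the actual codebase.
function clampProbability(value: number): number {
  // Probabilities are stored in the 0-100 range.
  return Math.min(100, Math.max(0, Math.round(value)));
}

function formatPercent(value: number, decimals = 1): string {
  return `${value.toFixed(decimals)}%`;
}

console.log(clampProbability(103));  // 100
console.log(formatPercent(65, 0));   // "65%"  (a prediction's probability)
console.log(formatPercent(72.0666)); // "72.1%" (e.g. the hit rate)
```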
Manual, human-in-the-loop process only.
This product is a manual forecast-tracking system. No status or probability is updated automatically from external feeds or other automated systems. Operators apply updates via the admin tools, which update prediction_current and insert into prediction_history (and optionally prediction_sources). Every change is written to the audit table (prediction_history) so that chronology and notes are preserved. The public site is read-only; only authenticated admins can write.
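A sketch of what a manual status change might look like at the database level, assuming a generic SQL client with Postgres-style placeholders. Only the table names (prediction_current, prediction_history, prediction_sources) come from this document; the column names and the `db` API are assumptions.

```typescript
// Illustrative only: applies one operator-approved status change atomically.
async function applyStatusChange(
  db: { query: (sql: string, params: unknown[]) => Promise<unknown> },
  predictionId: string,
  newStatus: string,
  note: string,
  sourceUrls: string[]
): Promise<void> {
  await db.query("BEGIN", []);
  try {
    // 1. Update the current state of the prediction.
    await db.query(
      "UPDATE prediction_current SET status = $1 WHERE id = $2",
      [newStatus, predictionId]
    );
    // 2. Append to the audit table so chronology and notes are preserved.
    await db.query(
      "INSERT INTO prediction_history (prediction_id, status, note, changed_at) VALUES ($1, $2, $3, now())",
      [predictionId, newStatus, note]
    );
    // 3. Optionally attach one or more sources.
    for (const url of sourceUrls) {
      await db.query(
        "INSERT INTO prediction_sources (prediction_id, url) VALUES ($1, $2)",
        [predictionId, url]
      );
    }
    await db.query("COMMIT", []);
  } catch (err) {
    await db.query("ROLLBACK", []);
    throw err;
  }
}
```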
We do not treat "no update" as a miss.
A prediction that remains not yet decided is exactly that: open. It is not considered a hit or a miss until it is explicitly resolved as happened, partially happened, or contradicted. Our hit rate and resolution metrics are defined so that unresolved items are excluded from the "hit rate" denominator and only resolved outcomes are counted. This avoids penalizing predictions that are still in play.
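A minimal sketch of that rule: unresolved items never enter the denominator. How "partially happened" is weighted is an assumption here (counted as half a hit); the text above only says that resolved outcomes are counted.

```typescript
type Outcome = "happened" | "partially_happened" | "contradicted" | "not_yet_decided";

// Returns the hit rate as a percentage, or null when nothing is resolved yet.
function hitRate(outcomes: Outcome[]): number | null {
  const resolved = outcomes.filter((o) => o !== "not_yet_decided");
  if (resolved.length === 0) return null; // everything is still in play
  const hits = resolved.reduce(
    (sum, o) => sum + (o === "happened" ? 1 : o === "partially_happened" ? 0.5 : 0),
    0
  );
  return (100 * hits) / resolved.length;
}
```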
Recent additions to how we gather evidence, explain probability, and surface source framing.
How we find sources and propose updates; admins always approve or reject.
Admins can run Verify Predictions: we search the web (e.g. via Tavily) for articles related to each prediction, then use an LLM to summarize relevance and propose a status or probability update. The pipeline suggests a short description, change note, and which article URLs count as primary sources. Admins review each proposal and either approve (applying the update with one or more sources) or reject; nothing is written without human approval.
We bias search toward reports of actual occurrence: status changes (happened, partially happened, contradicted) are only suggested when sources report that the event has actually occurred (or that a clear opposite has occurred). If the articles only discuss possibility or speculation, we suggest a probability update only, not a status change. Optional "Smart verification" uses the LLM to select which predictions are worth verifying (e.g. skipping those too far in the future or already resolved).
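A hedged sketch of the proposal shape and the flow around it. `searchWeb` and `summarize` stand in for the real integrations (e.g. a Tavily search and an LLM call); all names here are illustrative, and nothing in this flow writes to the database, since proposals go to an admin for approval or rejection.

```typescript
interface VerificationProposal {
  predictionId: string;
  // A status is proposed only when sources report actual occurrence
  // (or a clear opposite); otherwise only a probability is suggested.
  proposedStatus?: "happened" | "partially_happened" | "contradicted";
  proposedProbability?: number; // 0-100
  description: string;          // short summary of the evidence
  changeNote: string;           // suggested note for the audit history
  primarySourceUrls: string[];  // article URLs that count as primary sources
}

type Article = { url: string; snippet: string };

async function verifyPrediction(
  predictionId: string,
  predictionText: string,
  searchWeb: (query: string) => Promise<Article[]>,
  summarize: (id: string, text: string, articles: Article[]) => Promise<VerificationProposal>
): Promise<VerificationProposal> {
  const articles = await searchWeb(predictionText);
  // The proposal is only returned for human review; it is never applied here.
  return summarize(predictionId, predictionText, articles);
}
```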
Why a probability is what it is; each note can cite multiple sources.
When we update a prediction's probability, we store a change note and can attach one or more sources (URL, title, date, publisher). On the public site, hovering or tapping the probability shows the latest note; clicking opens a modal with all notes in order, each with its linked sources. This lets readers see the reasoning and evidence behind the current score. Admins can also attach sources when applying a probability-only update (e.g. from the "Update probabilities" flow that analyzes prediction history).
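The note-plus-sources structure behind the tooltip and modal might look roughly like this; field names are illustrative, not the actual schema.

```typescript
interface SourceRef {
  url: string;
  title: string;
  date: string;      // publication date
  publisher: string;
}

interface ProbabilityNote {
  probability: number;  // the 0-100 value set with this note
  changeNote: string;   // the reasoning shown on hover/tap
  createdAt: string;    // ISO timestamp, used to order notes in the modal
  sources: SourceRef[]; // zero or more linked sources per note
}

// The tooltip shows the most recent note; the modal lists all notes in order.
function latestNote(notes: ProbabilityNote[]): ProbabilityNote | undefined {
  return [...notes].sort((a, b) => b.createdAt.localeCompare(a.createdAt))[0];
}
```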
We label how sources frame the story; the platform stays neutral.
For each cited source we can run an AI-based analysis of editorial framing: whether the outlet or article leans toward a US/Israel-aligned narrative, an Iran-aligned narrative, is neutral, mixed, or unknown. This is about who is saying what and in whose favour, not whether the content is factually correct. We do not conflate source nationality with bias (e.g. Al Jazeera is not automatically "pro-Iran"; we analyze framing and emphasis).
On the predictions list and detail pages we show a compact bias strip (coloured dots per source and a lean meter). The meter is a horizontal bar: left = US/Israel-aligned, centre = neutral/mixed, right = Iran-aligned. Segments of 10% or more show a percentage label (e.g. "US/Israel: 33%"). Each dot is one source; hovering shows outlet name, country, and a short summary.
Color code (same on list and detail pages)
Example distribution (6 sources: 2 US/Israel, 1 neutral, 1 mixed, 2 Iran): the meter renders three roughly equal segments of about 33% each, so all three carry percentage labels.
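A sketch of how the meter segments and their labels could be derived from per-source labels, matching the example above. Grouping "mixed" and "unknown" into the centre segment is an assumption, as are all the names.

```typescript
type BiasLabel = "us_israel" | "neutral" | "mixed" | "iran" | "unknown";

interface MeterSegment {
  segment: string;
  pct: number;
  label?: string; // only segments of 10% or more carry a percentage label
}

function leanMeter(labels: BiasLabel[]): MeterSegment[] {
  const left = labels.filter((l) => l === "us_israel").length;
  const right = labels.filter((l) => l === "iran").length;
  const centre = labels.length - left - right; // neutral, mixed, unknown
  return [
    { segment: "US/Israel", n: left },
    { segment: "Neutral/Mixed", n: centre },
    { segment: "Iran", n: right },
  ].map(({ segment, n }) => {
    const pct = labels.length ? (100 * n) / labels.length : 0;
    return {
      segment,
      pct,
      label: pct >= 10 ? `${segment}: ${Math.round(pct)}%` : undefined,
    };
  });
}

// The 6-source example above yields three segments of ~33%, all labelled.
```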
The detail page has a full "Source Analysis" section with per-source labels and short summaries. Bias analysis is optional and can be run manually from the admin editor. Our disclaimer: Bias analysis is AI-generated and may be imperfect; it reflects framing and emphasis, not factual accuracy.