Forensic schedule analysts have a problem that software can solve. Given a stack of monthly schedule updates and a delayed milestone, the question is always the same: what happened, when, and why? Today that analysis is largely manual. An analyst opens each update in P6, eyeballs the critical path, compares activities across windows, and writes a narrative. Two analysts looking at the same data can reach different conclusions. WOET is our answer to that.
What Is WOET?
WOET stands for Window-Observed Execution Timeline. It takes a sequence of schedule updates, analyzes the CPM network state at each one, and produces a day-by-day classification of every calendar day from notice to proceed through completion (or the latest data date). Each day gets a tag that describes what was happening on the driving path at that moment in time: productive work, extended duration, or one of several categories of void time.
The output is a continuous timeline, built from contemporaneous observations, where every classification traces back to a specific schedule update, a specific CPM network state, and a specific set of float values. No judgment calls. No heuristics about which concurrent activity is "primary." The math decides.
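To make the shape of that output concrete, here is a minimal sketch of what one classified day might carry. The field names, tag names, and update IDs are illustrative assumptions, not WOET's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative tags; WOET's real taxonomy may differ.
PRODUCTIVE = "productive"
EXTENDED = "extended_duration"
VOID = "void"

@dataclass(frozen=True)
class DayRecord:
    """One classified calendar day on the driving path (hypothetical schema)."""
    day: date               # the calendar day being classified
    tag: str                # productive / extended_duration / void
    source_update: str      # schedule update the observation came from
    total_float_days: int   # float computed from that update's CPM snapshot

# A three-day slice of a timeline; every day traces back to its source update.
timeline = [
    DayRecord(date(2024, 3, 1), PRODUCTIVE, "UPD-2024-03", 0),
    DayRecord(date(2024, 3, 2), VOID, "UPD-2024-03", 0),
    DayRecord(date(2024, 3, 3), PRODUCTIVE, "UPD-2024-03", 0),
]

void_days = [r.day for r in timeline if r.tag == VOID]
```

The point of the structure is the audit trail: any tag can be challenged by pulling the named update and recomputing the float.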
Where It Sits in the Industry
The Association for the Advancement of Cost Engineering (AACE) publishes Recommended Practices for delay analysis under its Forensic Schedule Analysis framework (RP 29R-03). The most commonly referenced are the Methodological Implementation Protocols (MIPs).
MIP 3.4 is widely regarded as producing the most defensible results because it uses the schedule state that existed at the time, not a retroactive re-analysis. The challenge has always been execution: performing a true contemporaneous-period as-planned vs. as-built (APAB) analysis across dozens of updates, with consistent driving path analysis at each window, is extraordinarily labor-intensive by hand.
WOET automates the entire pipeline. Every schedule update is analyzed using its own network state. Every window comparison uses float values computed from that snapshot. The analyst's role shifts from manual data gathering to interpretation of results.
The Day-by-Day Timeline
WOET classifies every calendar day on the driving path into one of four primary categories.
Void days are further sub-classified by cause. WOET uses the relationship float data from our analysis engine to determine exactly why nothing was earning on the driving path.
*Conceptual WOET timeline for a single milestone*
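The classification idea can be sketched as a pure mapping from per-day signals to a tag. The signal names and void sub-categories below are invented for illustration; the real mapping comes from the engine's computed relationship float values:

```python
def classify_day(work_earned: bool, duration_overrun: bool,
                 waiting_on_predecessor: bool) -> str:
    """Map hypothetical per-day signals to a timeline tag.

    Signals and sub-categories are illustrative only; WOET derives the
    actual classification from relationship float data.
    """
    if work_earned:
        return "productive"
    if duration_overrun:
        return "extended_duration"
    # Nothing earned and no overrun: a void day, sub-classified by cause.
    if waiting_on_predecessor:
        return "void:predecessor_wait"
    return "void:unexplained"
```

Because each branch is decided by a computed signal rather than analyst judgment, the same inputs always land in the same bucket.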
Why Deterministic Matters
Forensic delay analysis lives in disputed territory. Opposing experts present competing narratives about the same project, often using the same schedule data. The disagreement usually traces back to subjective choices: which activity is "primary" when several finish on the same day, how to handle concurrent delays, whether to trust P6's stored float values or recalculate them.
WOET removes those choices. The output is repeatable by construction:
Same inputs, same outputs. Every time. Two analysts running WOET on the same set of XER files will produce identical timelines. The discussion moves from "what does the data show?" to "what does the data mean?" That second question is where expert judgment belongs.
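That determinism claim is testable the way any pure pipeline is: serialize the output canonically and compare digests across runs. A sketch, with a stand-in `build_timeline` function in place of the real engine:

```python
import hashlib
import json

def build_timeline(updates: list[dict]) -> list[dict]:
    """Stand-in for the WOET pipeline: a pure function of its inputs."""
    return [{"update": u["id"], "tag": "productive"} for u in updates]

def digest(timeline: list[dict]) -> str:
    # Canonical serialization (sorted keys) so identical timelines
    # always produce identical hashes.
    return hashlib.sha256(
        json.dumps(timeline, sort_keys=True).encode()
    ).hexdigest()

updates = [{"id": "UPD-01"}, {"id": "UPD-02"}]
run_a = digest(build_timeline(updates))
run_b = digest(build_timeline(updates))
assert run_a == run_b  # same inputs, same outputs, every time
```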
Built-In Reconciliation
A timeline is only useful if you can verify it. WOET enforces two reconciliation identities that must hold for every milestone.
When a reconciliation check fails, WOET flags a data quality issue in the underlying schedule updates. That's a feature: rather than hiding bad data behind plausible-looking output, WOET surfaces it explicitly.
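The specific identities aren't spelled out here, but one natural identity of this kind is coverage: every calendar day in a window must receive exactly one tag, so the per-category day counts must sum to the window's span. A hypothetical check:

```python
from datetime import date

def check_coverage(window_start: date, window_end: date,
                   tagged_days: dict[str, int]) -> bool:
    """Reconciliation sketch: tagged day counts must sum to the window span.

    Illustrative only; WOET's actual identities may differ.
    """
    span = (window_end - window_start).days + 1  # inclusive of both endpoints
    return sum(tagged_days.values()) == span

# March 2024 has 31 days; 20 + 4 + 7 = 31, so this window reconciles.
ok = check_coverage(date(2024, 3, 1), date(2024, 3, 31),
                    {"productive": 20, "extended_duration": 4, "void": 7})
```

A failed check doesn't say which tag is wrong, only that the window can't be trusted as-is, which is exactly the signal an analyst needs before building a narrative on it.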
How It Differs from Traditional APAB
Global As-Planned vs As-Built compares two endpoints: the baseline and the final schedule. It's a valid methodology, but it has known weaknesses when applied to large, complex projects. WOET addresses the most significant ones:
| CHALLENGE | GLOBAL APAB | WOET |
|---|---|---|
| Anchor placement | Binary anchor across entire project lifetime | Per-window anchor scoped to ~30 days |
| Float source | Final schedule's CPM, applied retroactively | Each window's own network snapshot |
| Concurrent activities | Influence test picks one "primary" | All longest-path activities are co-driving |
| Void classification | Analyst judgment | Direct mapping from computed float signals |
| Observation density | Two points (baseline and final) | One observation per schedule update |
There's one more difference worth calling out: scope. A manual APAB analysis targets a single milestone. If the dispute involves five milestones, the analyst does the work five times. Ten milestones, ten times. The effort scales linearly with the number of milestones and the number of update windows. WOET produces a timeline for any milestone you point it at, and the cost of adding another is essentially zero. Run every milestone in a project in seconds, not weeks.
None of this makes global APAB wrong. For many projects and many questions, it's perfectly adequate. WOET supports both approaches: users and agents can choose which schedule updates to include in the analysis. Select just the baseline and final update and you get traditional two-endpoint APAB. Include every monthly update and you get full contemporaneous-period resolution. It's the same engine either way.
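Under that framing, the choice of methodology reduces to the choice of inputs. A sketch, using a hypothetical `build_timeline` entry point and invented update IDs:

```python
def build_timeline(updates: list[str]) -> dict:
    """Stand-in for the engine: same code path regardless of input density."""
    mode = "two-endpoint" if len(updates) == 2 else "contemporaneous"
    return {"observations": len(updates), "mode": mode}

# Twelve hypothetical monthly updates for one project year.
all_updates = [f"UPD-2024-{m:02d}" for m in range(1, 13)]

# Traditional global APAB: just the baseline and the final update.
apab = build_timeline([all_updates[0], all_updates[-1]])

# Full contemporaneous-period resolution: every monthly update.
full = build_timeline(all_updates)
```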
Query It with Your AI Agent
Everything WOET computes is available through our MCP integration. That means any AI agent that speaks the Model Context Protocol can pull WOET timelines, drill into specific windows, check reconciliation quality, and trace individual day classifications back to their source data. Ask your agent "what caused the delay between March and June on Milestone 12?" and it can walk the timeline, identify the void blocks, and tell you which activities and relationships produced them.
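Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests, so an agent's question ultimately bottoms out in something like the following. The tool name and arguments here are invented for illustration, not WOET's real API:

```python
# Shape of an MCP tool call per the Model Context Protocol spec (JSON-RPC 2.0).
# "woet_timeline" and its arguments are hypothetical names.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "woet_timeline",
        "arguments": {
            "milestone": "Milestone 12",
            "from": "2024-03-01",
            "to": "2024-06-30",
        },
    },
}
```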
This is where deterministic output really pays off. Because every classification is traceable and every reconciliation check is verifiable, an AI agent can confidently navigate the data without hallucinating conclusions. It's not generating a narrative from vibes. It's reading structured, auditable results and reporting what the math says.
What's Next
WOET is in active development. The core computation engine is working and producing timelines on real project data. We're currently focused on the quality pipeline: reconciliation reporting, data quality scoring, and the visual timeline presentation that will make the results accessible to analysts and attorneys alike.
If you work in forensic schedule analysis and you're tired of rebuilding the same manual APAB comparison across dozens of updates, we'd love to hear from you. WOET is the kind of feature that benefits enormously from practitioner feedback, and we're building it in the open.