A graphing calculator is an extraordinary piece of engineering. It can solve differential equations, plot functions, and handle matrix operations that would take a human hours to work through by hand. And yet nobody confuses a calculator with a mathematician. The calculator executes. The mathematician knows what to ask.
We think about AI in forensic schedule analysis the same way.
The Data Is Already There
Consider a common investigation: detecting schedule manipulation. Logic changes that reroute the critical path away from delayed activities. Duration padding that absorbs float before a claim window. Constraint changes that artificially fix dates. The evidence for all of this exists in the schedule data. Every relationship added, removed, or modified is recorded across updates. Every duration change is measurable. Every constraint type is visible.
An AI agent with access to this data through tools like FPM's MCP integration can absolutely surface these patterns. It can compare logic networks across updates, flag activities whose relationships changed right before a claim period, trace how float shifted after a batch of revisions, and present it all clearly. The computational legwork that used to take days of manual comparison can happen in minutes.
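To make the mechanics concrete, here is a minimal sketch of one such comparison: diffing the logic network between two consecutive updates. Everything here is illustrative; it assumes relationships have already been exported as plain (predecessor, successor, link type) tuples and says nothing about FPM's actual data model.

```python
# Minimal sketch: diff the logic network between two schedule updates.
# Assumes relationships were exported as (pred_id, succ_id, link_type)
# tuples; the names and shapes here are illustrative, not FPM's API.

def diff_logic(update_a, update_b):
    """Return relationships added and deleted between two updates."""
    a, b = set(update_a), set(update_b)
    return {"added": sorted(b - a), "deleted": sorted(a - b)}

prev = [("A100", "A200", "FS"), ("A200", "A300", "FS")]
curr = [("A100", "A200", "FS"), ("A100", "A300", "FS")]  # A200->A300 rerouted

changes = diff_logic(prev, curr)
print(changes["added"])    # [('A100', 'A300', 'FS')]
print(changes["deleted"])  # [('A200', 'A300', 'FS')]
```

A real diff also has to cope with renamed activities and changed lags, but the shape of the operation is the same: set differences over relationship records, repeated across every update window.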
That is genuinely powerful. It is also not the hard part.
What the Agent Cannot Do
The hard part is knowing what question to ask and what the answer means. Is a logic change manipulation, or is it a legitimate re-sequencing due to changed site conditions? Did float consumption happen because someone was gaming the schedule, or because an approved change order restructured the work? The data can tell you what changed. The analyst determines why it matters.
This is where domain expertise is irreplaceable. An experienced forensic analyst understands contract provisions, recognizes industry-standard scheduling practices, knows which patterns are suspicious versus routine, and can construct a narrative that holds up under cross-examination. That judgment layer sits on top of the data, and no amount of computational power substitutes for it.
The division of labor:
- The tool computes CPM, traces paths, measures float, diffs schedules, quantifies delays, and apportions concurrency. Deterministic. Repeatable. Auditable. (A toy sketch of the float calculation follows this list.)
- The agent navigates the data, retrieves relevant results, compares across periods, spots patterns, summarizes findings, and formats output. Fast. Tireless. Consistent.
- The analyst designs the investigation, interprets findings in contractual context, validates conclusions against industry practice, and builds the defensible narrative. Experienced. Accountable. Authoritative.
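To ground the first of those layers, here is a toy sketch of the deterministic core: a forward and backward CPM pass over a four-activity network, with total float falling out as the gap between late and early start. The network, durations, and names are invented for illustration; a production engine also handles calendars, lags, constraints, and the full set of relationship types.

```python
# Toy CPM pass over a tiny activity-on-node network: forward pass for
# early dates, backward pass for late dates, total float as the gap.
# A deliberately simplified illustration, not FPM's engine.

durations = {"A": 5, "B": 3, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # FS links only
order = ["A", "B", "C", "D"]  # topological order

early_finish = {}
for act in order:  # forward pass
    early_start = max((early_finish[p] for p in preds[act]), default=0)
    early_finish[act] = early_start + durations[act]

project_end = max(early_finish.values())
late_start = {}
for act in reversed(order):  # backward pass
    succs = [s for s in order if act in preds[s]]
    late_finish = min((late_start[s] for s in succs), default=project_end)
    late_start[act] = late_finish - durations[act]

for act in order:
    total_float = late_start[act] - (early_finish[act] - durations[act])
    print(act, "float:", total_float)  # critical activities show float 0
```

Running this prints float 0 for A, C, and D (the critical path) and float 1 for B. Deterministic, repeatable, auditable: the same inputs always produce the same dates and float, which is exactly what makes the layer trustworthy.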
SOPs Over Buttons
There is a temptation to bake every investigation into a dedicated feature. A "Find Manipulation" button. A "Detect Gaming" workflow. And for well-defined, repeatable analyses, that makes sense. FPM already automates delay quantification, concurrency apportionment, and driving path analysis because those processes follow deterministic rules.
Investigative analysis is different. Each project has its own contract, its own history, its own cast of characters. The questions vary. "Did the GC restructure logic to hide an owner-caused delay?" is a fundamentally different investigation from "Was acceleration reflected in the schedule updates?" Both draw on the same underlying data, yet each demands a different analytical approach.
Rather than trying to encode every possible investigation into the software, the better play is standard operating procedures: structured guidance documents that tell the AI agent how to approach a specific type of investigation. Think of them as playbooks.
An SOP for detecting critical path manipulation might instruct the agent to:
- Pull the driving path to the contested milestone for each update window
- Diff the logic network between consecutive updates, filtering for relationship adds and deletes
- Identify activities that were on the driving path in one update and removed from it in the next
- Cross-reference logic changes with float changes on those activities (this step and the one before it are sketched after the list)
- Flag instances where logic changes coincide with delay events on the responsible party's scope
- Summarize findings with specific activity IDs, dates, and float values
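Steps three and four are mechanical enough to sketch. Assuming each update has been reduced to a set of driving-path activity IDs and a per-activity float table (both structures hypothetical), the cross-reference is a few set operations:

```python
# Sketch of SOP steps 3-4: activities that fell off the driving path
# between updates, cross-referenced with float movement. Inputs are
# assumed to be pre-extracted per update; names are hypothetical.

def suspects(prev, curr):
    """Flag activities that left the driving path while gaining float."""
    dropped = prev["driving_path"] - curr["driving_path"]
    flags = []
    for act in sorted(dropped):
        gained = curr["float"].get(act, 0) - prev["float"].get(act, 0)
        if gained > 0:  # float appeared where there was none: worth a look
            flags.append((act, gained))
    return flags

update_05 = {"driving_path": {"A200", "A310"}, "float": {"A200": 0, "A310": 0}}
update_06 = {"driving_path": {"A310"}, "float": {"A200": 12, "A310": 0}}

print(suspects(update_05, update_06))  # [('A200', 12)]
```

Everything this flags still goes to the analyst. A dropped activity with new-found float may simply reflect an approved re-sequence; the code cannot tell the difference, and it does not try to.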
The agent follows the procedure. The tools provide the data. The analyst reviews the output, filters out legitimate changes, and builds the case from what remains. Each layer does what it is good at.
Better Tools for Seasoned Analysts
We hear the concern from experienced practitioners: "Are you trying to replace us?" No. We are trying to give you a better calculator.
The analyst who has spent twenty years reading P6 schedules and testifying in depositions has something no AI can replicate: hard-won intuition about how schedules get gamed, which patterns indicate bad faith, and how to present findings to a trier of fact. What that analyst has traditionally lacked is the ability to process fifty schedule updates across thousands of activities without going blind from spreadsheet fatigue.
That is what changes. The tedious, mechanical parts of analysis become fast. The data retrieval that used to require hours of clicking through P6 filters happens in a conversation. The comparisons that required side-by-side printouts happen computationally. The analyst's time shifts from data wrangling to actual analysis. From assembly to interpretation.
The Parallel to Software Development
We see the same dynamic in our own work building FPM. AI is central to how we develop the product. Claude helps us write algorithms, debug edge cases, reason through CPM semantics, and maintain a complex codebase. It handles the low-level implementation. It is excellent at it.
What it cannot do is decide what to build next. It does not understand why a particular forensic methodology matters for a specific type of dispute. It cannot weigh the trade-offs between analytical depth and user experience. It needs guidance on architecture, on priorities, on what "correct" means in context. The developer still drives.
Forensic schedule analysis works the same way. The tools get smarter. The agents get more capable. The speed increases. The objectivity improves. The domain expertise of the analyst remains the thing that makes the output meaningful.
Getting Started
If you are an analyst exploring how AI can fit into your practice, here is our recommendation: start with the investigations you already know how to do. Write down the steps. Be specific about what data you look at, what comparisons you make, what thresholds trigger concern. That document becomes your SOP. Hand it to an AI agent alongside the analytical tools, and watch it execute in minutes what used to take days.
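A single sentence of such an SOP, say "flag any duration increase over 20% between consecutive updates," translates directly into a check the agent can run. A minimal sketch, with the threshold and data shapes invented for illustration:

```python
# Illustrative threshold from a hypothetical SOP: flag duration growth
# of more than 20% between consecutive updates. Data shapes are invented.

THRESHOLD = 0.20

def padded(durations_prev, durations_curr, threshold=THRESHOLD):
    """Return activities whose duration grew past the threshold."""
    flagged = []
    for act, before in durations_prev.items():
        after = durations_curr.get(act, before)
        if before > 0 and (after - before) / before > threshold:
            flagged.append((act, before, after))
    return flagged

print(padded({"A100": 10, "A200": 8}, {"A100": 13, "A200": 8}))
# [('A100', 10, 13)] -- a 30% increase; the analyst decides if it is padding
```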
Then refine. You will find that the agent surfaces things you would have missed manually because it does not get tired and it does not skip activities. You will also find that it flags false positives that your experience immediately dismisses. That is the collaboration working. The calculator crunches everything. The mathematician knows which results matter.
FPM's MCP server exposes the full analytical engine to any compatible AI client. The data is there. The tools are there. What makes the difference is the expertise you bring to the conversation.