Why Most Variance Narratives Fail
Every month, Control Account Managers across thousands of defense programs write variance narratives. The vast majority are useless. They restate the numbers in words — “The control account is $150K unfavorable due to higher-than-planned costs” — and call it analysis. That is not analysis. That is reading a spreadsheet aloud.
Effective variance analysis answers a fundamentally different question: why did performance deviate from plan, and what are we going to do about it? The narrative must trace the variance to a root cause, quantify its impact, and propose corrective action with a timeline. Anything less is administrative theater.
The monthly variance analysis rhythm is the heartbeat of EVMS. When done well, it provides program managers with the insight they need to make decisions before small problems become large ones. When done poorly, it generates paperwork that nobody reads.
CAR/VAR Thresholds and When They Trigger
Variance thresholds define when a control account requires formal written analysis. Most contracts specify thresholds in the Contract Data Requirements List (CDRL) or the program’s EVMS System Description. The standard approach uses a dual-condition test: both a percentage threshold and a dollar threshold must be breached.
| Threshold Type | Typical Value | Applies To | Trigger Logic |
|---|---|---|---|
| Cost Variance (CV) | ±10% AND ±$100K | Control Account cumulative | Both conditions must be met |
| Schedule Variance (SV) | ±10% AND ±$100K | Control Account cumulative | Both conditions must be met |
| VAC Threshold | ±10% of CA BAC | Variance at Completion | Single condition |
| Program-Level | ±5% of PMB | Total program CV or SV | Triggers escalation to customer |
When a control account breaches its threshold, the CAM must produce a Variance Analysis Report (VAR). If the variance is significant or persistent, the program may escalate to a Corrective Action Report (CAR), which adds formal commitments: specific actions, responsible parties, target completion dates, and follow-up tracking.
💡 Threshold Tiering Prevents Noise
A flat $100K threshold means a $50K control account never triggers formal analysis — even at 200% overrun — while a $10M control account triggers at just 1% deviation. Smart programs tier their thresholds: small CAs use tighter dollar thresholds ($25K), large CAs use looser ones ($250K). The goal is to capture meaningful variances without drowning CAMs in paperwork for rounding errors.
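The dual-condition CV/SV test with tiered dollar thresholds can be sketched in a few lines of Python. The tier boundaries and dollar values here are illustrative, not taken from any particular contract or EVMS System Description:

```python
# Sketch of the dual-condition threshold test with tiered dollar
# thresholds. Tier boundaries and values are illustrative only.

def dollar_threshold(bac: float) -> float:
    """Pick the dollar threshold tier from control account size (BAC)."""
    if bac < 500_000:        # small CA: tighter dollar threshold
        return 25_000
    elif bac < 5_000_000:    # mid-size CA: standard threshold
        return 100_000
    else:                    # large CA: looser threshold
        return 250_000

def breaches_threshold(variance: float, base: float, bac: float,
                       pct_threshold: float = 0.10) -> bool:
    """Dual-condition test: BOTH the percentage and the dollar
    thresholds must be exceeded, in either direction."""
    if base == 0:
        return abs(variance) >= dollar_threshold(bac)
    pct = abs(variance) / abs(base)
    return pct >= pct_threshold and abs(variance) >= dollar_threshold(bac)

# A $50K CA overrun by $60K now triggers under its $25K tier,
# while a $10M CA at 1% deviation ($100K) does not.
print(breaches_threshold(-60_000, 50_000, bac=50_000))            # True
print(breaches_threshold(-100_000, 10_000_000, bac=10_000_000))   # False
```

With a flat $100K threshold, the first case would never surface and the second would depend only on the percentage test; tiering fixes both ends.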
Segregating Variances: Rate, Usage, and Schedule
A cost variance of –$200K tells you almost nothing by itself. To understand the root cause, you must decompose it into its components. Every cost variance can be segregated into three categories:
| Component | Definition | Formula | Example Root Cause |
|---|---|---|---|
| Rate Variance | Actual rates differ from planned rates | (Actual Rate – Planned Rate) × Actual Quantity | Senior engineers assigned instead of mid-level; overtime premium; subcontractor rate increase |
| Usage Variance | More or fewer resources used than planned | (Actual Quantity – Planned Quantity) × Planned Rate | Rework required; scope growth; efficiency gain; learning curve |
| Schedule Variance | Work performed ahead of or behind the plan | EV – PV (time-based) | Late material delivery; resource unavailability; predecessor slip |
Scenario: Control Account 1.3.2 (Software Integration) shows CV = –$200K cumulative. The current-period labor data below account for –$54K of that total:
| Element | Planned | Actual | Delta |
|---|---|---|---|
| Labor Hours | 2,000 hrs | 2,400 hrs | +400 hrs (usage) |
| Labor Rate | $75/hr | $85/hr | +$10/hr (rate) |
| Planned Cost (PV) | $150,000 | — | — |
| Actual Cost (AC) | — | $204,000 | — |
Rate Variance: ($85 – $75) × 2,400 hrs = $24,000 unfavorable, i.e. –$24K ($10/hr premium on senior staff)
Usage Variance: (2,400 – 2,000) × $75 = $30,000 unfavorable, i.e. –$30K (400 extra hours for rework)
Combined Explanation: –$24K from rate (senior engineers substituted for mid-level) + –$30K from usage (integration rework after interface change) = –$54K, matching PV – AC for the period. The remaining –$146K of the cumulative variance traces to prior periods. Now management knows: the rate issue requires a staffing plan correction; the usage issue requires an interface control process fix.
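The decomposition above can be checked mechanically. A minimal Python sketch, using the convention that negative means unfavorable:

```python
# Reproduce the CA 1.3.2 rate/usage segregation.
# Sign convention: negative = unfavorable (actual cost above plan).

def segregate(planned_hrs: float, planned_rate: float,
              actual_hrs: float, actual_rate: float) -> tuple[float, float]:
    rate_var = -(actual_rate - planned_rate) * actual_hrs    # rate component
    usage_var = -(actual_hrs - planned_hrs) * planned_rate   # usage component
    return rate_var, usage_var

rate_var, usage_var = segregate(2_000, 75, 2_400, 85)
print(rate_var)              # -24000  (senior-staff rate premium)
print(usage_var)             # -30000  (rework hours at planned rate)
print(rate_var + usage_var)  # -54000  = PV - AC = 150,000 - 204,000
```

Note the two components sum exactly to the period's PV minus AC, which is a useful cross-check before publishing the narrative.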
Writing Narratives That Drive Action
Every variance narrative should follow a five-part structure. This is not optional — it is the difference between a narrative that gets read and one that gets filed.
| Section | Question Answered | Bad Example | Good Example |
|---|---|---|---|
| 1. State the Variance | What is the number? | “CV is unfavorable.” | “CV = –$200K (–13.3%) cumulative, –$45K current period.” |
| 2. Root Cause | Why did it happen? | “Higher than planned costs.” | “Two SR engineers replaced planned MID staff due to skill gap on radar interface; 400 hrs of rework from ICN-042 interface change.” |
| 3. Impact | What does it mean? | “May affect EAC.” | “Drives EAC increase of $180K; delays CDR entry by 2 weeks.” |
| 4. Corrective Action | What are we doing? | “Will monitor closely.” | “(a) Onboarding 2 MID engineers by 15 Mar; (b) Baselined interface freeze after ICN-042 closure.” |
| 5. Recovery Timeline | When will it improve? | “Expect improvement soon.” | “Current-period CV will return to ±5% by Apr; cumulative CV will stabilize at –$200K.” |
Notice the pattern: bad narratives use vague language and restate what the numbers already show. Good narratives name specific causes, quantify specific impacts, and commit to specific actions with dates. A program manager reading the good version can make a decision; one reading the bad version cannot.
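Programs that automate VAR intake can enforce the five-part structure with a simple completeness check. A hypothetical sketch; the field names are illustrative, not drawn from any EVMS standard:

```python
# Hypothetical five-part narrative template with a completeness gate.
from dataclasses import dataclass, fields

@dataclass
class VarianceNarrative:
    variance_statement: str   # 1. the number, with % and period
    root_cause: str           # 2. why it happened
    impact: str               # 3. effect on EAC / schedule
    corrective_action: str    # 4. specific actions, owners, dates
    recovery_timeline: str    # 5. when performance returns to band

def is_complete(n: VarianceNarrative) -> bool:
    """All five sections must be non-empty to pass review."""
    return all(getattr(n, f.name).strip() for f in fields(n))

draft = VarianceNarrative(
    variance_statement="CV = -$200K (-13.3%) cumulative",
    root_cause="400 hrs rework from ICN-042 interface change",
    impact="EAC +$180K; CDR entry slips 2 weeks",
    corrective_action="Onboard 2 MID engineers by 15 Mar",
    recovery_timeline="",     # missing -> fails the gate
)
print(is_complete(draft))     # False
```

A gate like this catches structurally incomplete narratives; it cannot, of course, catch a vague root cause, which still requires human review.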
The Monthly Variance Analysis Rhythm
Variance analysis is not a one-time event — it is a monthly discipline that follows a predictable cadence. The rhythm ensures that data flows from the accounting system through analysis to management action within a compressed timeline.
| Day | Activity | Owner |
|---|---|---|
| Day 1–3 | Monthly cost close: actuals posted, EV claimed, schedule statused | Finance / Scheduler / CAMs |
| Day 4–5 | Data validation: reconcile AC, EV, and schedule; resolve discrepancies | EVMS Analyst |
| Day 6–8 | CAMs write variance narratives for breached thresholds | CAMs |
| Day 9–10 | Program-level review: roll up narratives, identify systemic issues | PM / Program Controls |
| Day 11–15 | CPR/IPMR assembly and submission to customer | Program Controls |
The critical path runs through data quality. If actuals are not posted correctly by Day 3, every downstream activity slips. Programs that struggle with variance analysis almost always have an upstream data problem — not a narrative-writing problem.
⚠️ The “Will Monitor” Anti-Pattern
If your corrective action plan says “will continue to monitor,” you do not have a corrective action plan. Monitoring is not action — it is the absence of action. Every CAR must include at least one concrete step: reassign staff, reschedule work, descope, change the technical approach, or request a baseline change. If none of these apply, then either the variance is not significant enough to warrant a CAR, or the team has not yet identified the real problem.
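A narrative-screening tool can flag this anti-pattern automatically. A rough sketch; the phrase list is illustrative, and real reviews still need human judgment rather than string matching:

```python
# Flag "non-action" language in corrective action text.
# Phrase patterns are illustrative examples, not an exhaustive list.
import re

NON_ACTION_PATTERNS = [
    r"\bwill (continue to )?monitor\b",
    r"\bwatch(ing)? closely\b",
    r"\bno action required\b",
]

def flags_non_action(corrective_action: str) -> bool:
    """True if the text contains a known non-action phrase."""
    text = corrective_action.lower()
    return any(re.search(p, text) for p in NON_ACTION_PATTERNS)

print(flags_non_action("Will continue to monitor cost trends."))   # True
print(flags_non_action("Reassign 2 engineers; freeze interface after ICN-042 closure."))  # False
```

Flagged CARs go back to the CAM with the question from above: what concrete step, by whom, by when?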
🎯 The Bottom Line
Variance analysis is not about explaining numbers — it is about identifying root causes and driving corrective action. Segregate every variance into rate, usage, and schedule components to find the real problem. Write narratives that answer five questions: what, why, impact, action, and timeline. Follow the monthly rhythm religiously, and never accept “will monitor” as a corrective action. Next: Estimate at Completion Methods — translating today’s performance into tomorrow’s forecast.