Touch Labor vs. Span: What to Measure
Learning curves model the reduction in labor effort as cumulative output grows. The fundamental unit of measurement is touch labor hours: the time a technician's hands are on the product, performing value-added work. This is distinct from several other time measures that are frequently confused with it.
| Metric | Includes | Use for Learning Curves? |
|---|---|---|
| Touch labor | Drilling, fastening, routing, wiring, bonding, testing | Yes — this is the correct input |
| Direct labor | Touch labor + direct support (material handling, kit staging) | Sometimes — if support ratio is stable |
| Earned hours | Planned hours credited upon task completion | No — reflects the plan, not actuals |
| Span time | Calendar duration including wait, queue, and idle time | No — non-labor factors dominate |
| Charged hours | Hours booked to a work order (may include overhead, meetings) | Only if overhead is stripped out |
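The filtering implied by this table can be sketched in code: strip charged hours down to touch labor before any curve fitting. A minimal sketch, assuming a per-task record format; the category names here are hypothetical and must be mapped to your own labor codes.

```python
# Minimal sketch: reduce charged hours to touch labor before curve fitting.
# The category names are hypothetical; map them to your own labor codes.

TOUCH_CATEGORIES = {"drill", "fasten", "route", "wire", "bond", "test"}

def touch_hours(records):
    """Sum only hands-on (touch) hours from per-task charge records."""
    return sum(r["hours"] for r in records if r["category"] in TOUCH_CATEGORIES)

records = [
    {"category": "drill", "hours": 12.0},
    {"category": "meeting", "hours": 2.0},      # charged, but not touch labor
    {"category": "wire", "hours": 8.5},
    {"category": "kit_staging", "hours": 3.0},  # direct support, not touch
]
print(touch_hours(records))  # 20.5
```

In practice this filter runs against the timekeeping extract, not a hand-built list, but the principle is the same: everything that is not hands-on work is excluded before the regression sees it.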
⚠️ Charge Number Contamination
In many factories, technicians charge time to the unit they are physically near, not the unit they are working on. Cross-charging, support labor booked to production charge numbers, and “bucket” work orders that pool hours across multiple units are the most common sources of data corruption. Always verify that hours are truly unit-specific before using them in a learning curve.
Normalizing for Rate Changes
Production rate affects hours per unit independently of learning. When a factory doubles its rate, it hires new workers, adds shifts, and stretches supervision thinner. These rate effects look like learning disruptions if you do not account for them. Conversely, a rate reduction can artificially improve hours per unit because experienced workers have more time per task.
Scenario: Production rate changes from 2 units/month to 4 units/month at unit 50.
Observed data: Units 45–49 average 4,800 hours. Units 51–55 average 5,200 hours. The learning curve predicts 4,650 hours at the midpoint.
Rate penalty: 5,200 – 4,650 = 550 hours attributable to rate change (workforce dilution, overtime, new hire inefficiency).
Normalization options:
- Option A: Subtract 550 hours from units 51–55 and taper the adjustment over the next 20 units as the workforce stabilizes
- Option B: Segment the curve — fit one curve to units 1–49 and a new curve starting at unit 50 with an adjusted T1
- Option C: Include a rate variable in a multivariate regression (requires sufficient data at multiple rates)
Option A is most common for isolated rate changes. Option B is better when the rate change is permanent and large. Option C requires 30+ data points across multiple rate regimes.
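Option A from the scenario above can be written as a small adjustment function: subtract the full rate penalty from the first post-change units, then taper it linearly to zero as the workforce stabilizes. The numbers come from the scenario; the linear taper shape and the five-unit full-penalty window are assumptions you would tune to your own data.

```python
# Hedged sketch of Option A: subtract the 550-hour rate penalty from the
# first post-change units, then taper it linearly to zero over 20 units.
# Linear taper and the 5-unit full-penalty window are assumptions.

PENALTY = 5200.0 - 4650.0   # observed minus predicted = 550 hours
TAPER_UNITS = 20

def rate_adjustment(unit, change_unit=50):
    """Hours to subtract from a unit's observed HPU (linear taper)."""
    offset = unit - change_unit
    if offset < 1:
        return 0.0
    if offset <= 5:                   # units 51-55: full penalty
        return PENALTY
    if offset <= 5 + TAPER_UNITS:     # units 56-75: taper to zero
        return PENALTY * (1 - (offset - 5) / TAPER_UNITS)
    return 0.0

print(rate_adjustment(53))  # 550.0
print(rate_adjustment(65))  # 275.0 (halfway through the taper)
print(rate_adjustment(80))  # 0.0
```

The normalized hours (observed minus adjustment) are what you feed into the curve fit; the raw observations are kept for audit.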
Accounting for Disruptions
Disruptions break the learning pattern. A strike, pandemic, supply chain crisis, or major engineering change does not just add hours to the affected units — it can partially reset the learning curve for units that follow. The key is distinguishing between disrupted hours (which should be removed from the data) and learning loss (which is a real change to the curve going forward).
| Disruption Type | Typical Impact | Data Treatment |
|---|---|---|
| Production halt (< 3 months) | 5–15% hours increase for 3–8 units after restart | Flag affected units, add disruption factor, do not exclude |
| Extended shutdown (3–12 months) | 15–30% increase, partial workforce turnover | Segment the curve; treat post-restart as a partial reset |
| Supply chain disruption | Rework hours spike due to out-of-sequence work | Separate rework hours, normalize base hours to pre-disruption trend |
| Major engineering change | Affected operations reset to near-first-unit levels | Decompose by operation; reset only changed operations |
| Workforce turnover (> 30%) | 10–20% increase, gradual recovery over 10–20 units | Model as a learning curve reset with a lower T1 than original |
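The "partial reset" treatment for an extended shutdown can be sketched as follows: fit a unit learning curve (Crawford form, y = T1 · x^b) to the pre-shutdown data, then start a new segment whose first-unit value sits between the original T1 and the last pre-shutdown unit. The 70% retention factor below is an illustrative assumption, not a standard value.

```python
# Minimal sketch of a partial learning-curve reset after an extended shutdown.
# Crawford unit curve: hours(x) = T1 * x**b, b = log(slope%) / log(2).
# The retention factor is an assumed illustration, not a standard value.

import math

def unit_hours(t1, slope_pct, x):
    """Hours for unit x on a Crawford unit curve with the given slope (e.g. 85)."""
    b = math.log(slope_pct / 100.0) / math.log(2.0)
    return t1 * x ** b

t1, slope = 10000.0, 85.0
last_pre = unit_hours(t1, slope, 49)       # last unit before the shutdown

# Partial reset: retention=1.0 keeps all learning, retention=0.0 is a
# full reset back to the original T1.
retention = 0.7                            # assume 70% of learning retained
t1_new = t1 - retention * (t1 - last_pre)  # new segment's first-unit hours

print(round(last_pre), round(t1_new))
```

Post-restart units are then fit as a fresh curve starting from `t1_new`, typically with a slope near the original unless the workforce changed substantially.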
What Makes “Good” HPU Data
Good hours-per-unit (HPU) data has five characteristics. If any of these are missing, the learning curve analysis will be unreliable and the resulting forecasts will carry hidden bias.
| Attribute | Definition | How to Verify |
|---|---|---|
| Unit-specific | Hours are traceable to a single serial number | Check charge number structure; confirm no pooled work orders |
| Complete | All operations for the unit are closed and final | Verify no open work orders, no pending rework, all buy-off complete |
| Consistent | Same scope of work across all units in the dataset | Confirm configuration, charge number definitions, and labor categories are stable |
| Accurate | Hours reflect actual work performed, not planned or estimated | Cross-check with timekeeping system, compare to supervisor logs |
| Timely | Data is current, with no retroactive adjustments pending | Confirm accounting period is closed, no pending cost transfers |
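Several of these attributes can be screened automatically before analysis begins. A minimal sketch, assuming a hypothetical unit-record format; the field names are placeholders to adapt to your ERP/MES extract, and the configuration check here hard-codes an expected baseline purely for illustration.

```python
# Hedged sketch: automated screens for the data-quality attributes above.
# Field names on the unit record are hypothetical; adapt to your own extract.

def screen_unit(unit, expected_config="Block 1"):
    """Return a list of data-quality flags for one unit's HPU record."""
    flags = []
    if unit.get("pooled_work_order"):
        flags.append("not unit-specific")
    if unit.get("open_work_orders", 0) > 0 or unit.get("pending_rework"):
        flags.append("incomplete")
    if unit.get("config_baseline") != expected_config:
        flags.append("inconsistent scope")
    if not unit.get("period_closed"):
        flags.append("accounting period still open")
    return flags

unit = {"serial": "AV-0051", "open_work_orders": 2,
        "config_baseline": "Block 1", "period_closed": True}
print(screen_unit(unit))  # ['incomplete']
```

Units that return any flag are held out of the dataset (or normalized) rather than fed straight into the regression.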
Common Data Quality Problems
After reviewing HPU datasets from dozens of aerospace programs, these are the problems that appear most frequently. Each one biases the learning curve in a specific direction, and the biases do not cancel out — they compound.
| Problem | Symptom | Bias Direction | Fix |
|---|---|---|---|
| Cross-charging | Hours scatter increases, some units inexplicably high or low | Random — inflates variance, weakens R² | Audit charge practices, reconcile with supervisor records |
| Incomplete units in dataset | Recent units appear to have fewer hours (still in work) | Understates recent hours, overstates learning | Only include fully completed units with all work orders closed |
| Configuration changes not tracked | Step change in hours at a specific unit number | Overstates or understates learning depending on direction | Map configuration changes to unit effectivity, normalize or segment |
| Overhead in touch labor | All units appear higher than expected, flat learning | Understates learning rate (flatter curve) | Strip non-touch hours (meetings, training, general support) |
| Retroactive cost transfers | Historical unit hours change between reporting periods | Unpredictable — corrupts trend analysis | Lock data after accounting close, use snapshot date |
⚠️ The Incomplete Unit Trap
This is the single most common and most damaging data error. If your dataset includes units that are still in production, those units will show fewer hours than they will ultimately accumulate. The learning curve regression will interpret this as faster-than-actual learning, producing an optimistic forecast. Always verify that every unit in your dataset has completed all operations and all work orders are closed.
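The optimistic bias is easy to demonstrate with synthetic data: generate units on a known 85% curve, then pretend the last few units are still in work and have booked only part of their final hours. A log-log least-squares fit with those units included recovers a steeper (lower-percentage) slope than the truth. The 60% booking fraction is an arbitrary illustration.

```python
# Demonstration of the incomplete-unit trap on synthetic data: a true 85%
# Crawford unit curve, with the last 3 units only partially booked (60% of
# final hours, an arbitrary illustration). The contaminated fit overstates
# learning (slope below 85%).

import math

def fit_slope(units, hours):
    """OLS on log(hours) vs log(unit); returns the slope as a percent (e.g. 85.0)."""
    xs = [math.log(u) for u in units]
    ys = [math.log(h) for h in hours]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return 100 * 2 ** b

b_true = math.log(0.85) / math.log(2)
units = list(range(1, 21))
hours = [10000 * u ** b_true for u in units]          # exactly on the 85% curve

clean = fit_slope(units, hours)                        # recovers ~85%
hours_bad = hours[:-3] + [h * 0.6 for h in hours[-3:]]  # last 3 units in work
biased = fit_slope(units, hours_bad)

print(round(clean, 1), round(biased, 1))  # biased slope < clean slope
```

The biased slope forecasts every future unit too cheap, which is exactly the optimistic-EAC failure mode described above.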
🎯 The Bottom Line
The quality of your learning curve analysis is limited by the quality of your data. Use touch labor hours, not span or charged hours. Normalize for rate changes before fitting a curve. Identify and treat disruptions explicitly rather than letting them corrupt the regression. Verify all five data quality attributes before starting analysis. The most common error — including incomplete units — produces systematically optimistic forecasts that can undermine proposals and EACs. Next: Unit & Cumulative Average Curves — how to plot, regress, and build confidence intervals on clean data.