Key numbers at a glance:

  • Touch labor hours only: the correct input metric
  • 20+ units for a reliable curve
  • ±5% accuracy for clean data
  • 3–5 common data problems to screen for

Touch Labor vs. Span: What to Measure

Learning curves model the improvement in direct labor effort. The fundamental unit of measurement is touch labor hours — the time a technician’s hands are on the product, performing value-added work. This is distinct from several other time measures that are frequently confused with it.

| Metric | Includes | Use for Learning Curves? |
| --- | --- | --- |
| Touch labor | Drilling, fastening, routing, wiring, bonding, testing | Yes — this is the correct input |
| Direct labor | Touch labor + direct support (material handling, kit staging) | Sometimes — if the support ratio is stable |
| Earned hours | Planned hours credited upon task completion | No — reflects the plan, not actuals |
| Span time | Calendar duration including wait, queue, and idle time | No — non-labor factors dominate |
| Charged hours | Hours booked to a work order (may include overhead, meetings) | Only if overhead is stripped out |
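Stripping overhead out of charged hours (the last row above) can be sketched as a simple category filter. This is a minimal sketch; the category codes and the `(category, hours)` booking format are hypothetical, not from any specific timekeeping system:

```python
# Hypothetical labor-category codes; only hands-on categories count as
# touch labor. MEETING, MATL_HANDLING, etc. are stripped before analysis.
TOUCH_CATEGORIES = {"DRILL", "FASTEN", "ROUTE", "WIRE", "BOND", "TEST"}

def touch_hours(bookings):
    """Sum hours booked to touch-labor categories from a (category, hours)
    export, dropping overhead, meetings, and support charges."""
    return sum(hours for category, hours in bookings
               if category in TOUCH_CATEGORIES)

bookings = [("DRILL", 60.0), ("MEETING", 4.0), ("WIRE", 85.0),
            ("MATL_HANDLING", 12.0), ("TEST", 30.0)]
print(touch_hours(bookings))  # 175.0
```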

⚠️ Charge Number Contamination

In many factories, technicians charge time to the unit they are physically near, not the unit they are working on. Cross-charging, support labor booked to production charge numbers, and “bucket” work orders that pool hours across multiple units are the most common sources of data corruption. Always verify that hours are truly unit-specific before using them in a learning curve.
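One way to screen for bucket work orders is to check whether any work order's hours span more than one serial number. A minimal sketch, assuming a flat `(work_order, serial, hours)` export; the field names and identifiers are illustrative:

```python
from collections import defaultdict

def flag_pooled_work_orders(records):
    """Return work orders whose hours span multiple serial numbers --
    "bucket" orders that cannot be used as unit-specific HPU data.
    `records` is an assumed flat export of (work_order, serial, hours)."""
    serials_by_wo = defaultdict(set)
    for work_order, serial, _hours in records:
        serials_by_wo[work_order].add(serial)
    return {wo for wo, serials in serials_by_wo.items() if len(serials) > 1}

# WO-200 pools hours across two units and should be excluded or split.
records = [
    ("WO-100", "SN-045", 120.0),
    ("WO-200", "SN-045", 35.0),
    ("WO-200", "SN-046", 40.0),
]
print(flag_pooled_work_orders(records))  # {'WO-200'}
```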

Normalizing for Rate Changes

Production rate affects hours per unit independently of learning. When a factory doubles its rate, it hires new workers, adds shifts, and stretches supervision thinner. If you do not account for them, these rate effects look like learning disruptions. Conversely, a rate reduction can artificially improve hours per unit because experienced workers have more time per task.

📊 Rate Normalization Example: Adjustment Method

Scenario: Production rate changes from 2 units/month to 4 units/month at unit 50.

Observed data: Units 45–49 average 4,800 hours. Units 51–55 average 5,200 hours. The learning curve predicts 4,650 hours at the midpoint.

Rate penalty: 5,200 – 4,650 = 550 hours attributable to rate change (workforce dilution, overtime, new hire inefficiency).

Normalization options:

  • Option A: Subtract 550 hours from units 51–55 and taper the adjustment over the next 20 units as the workforce stabilizes
  • Option B: Segment the curve — fit one curve to units 1–49 and a new curve starting at unit 50 with an adjusted T1
  • Option C: Include a rate variable in a multivariate regression (requires sufficient data at multiple rates)

Option A is most common for isolated rate changes. Option B is better when the rate change is permanent and large. Option C requires 30+ data points across multiple rate regimes.
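Option A's tapered subtraction can be sketched directly from the worked example above (550-hour penalty, 20-unit stabilization window, both from the text):

```python
def taper_adjustment(penalty_hours, units_to_stabilize=20):
    """Per-unit hours to subtract under Option A: the full rate penalty
    at the first post-change unit, tapering linearly toward zero as the
    workforce stabilizes. Offsets count from the first post-change unit."""
    return [penalty_hours * (1 - i / units_to_stabilize)
            for i in range(units_to_stabilize)]

adjustments = taper_adjustment(550.0)
# The first post-change unit is normalized back onto the curve:
normalized_first = 5200.0 - adjustments[0]  # 4650.0
```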

Accounting for Disruptions

Disruptions break the learning pattern. A strike, pandemic, supply chain crisis, or major engineering change does not just add hours to the affected units — it can partially reset the learning curve for units that follow. The key is distinguishing between disrupted hours (which should be removed from the data) and learning loss (which is a real change to the curve going forward).

| Disruption Type | Typical Impact | Data Treatment |
| --- | --- | --- |
| Production halt (< 3 months) | 5–15% hours increase for 3–8 units after restart | Flag affected units, add disruption factor, do not exclude |
| Extended shutdown (3–12 months) | 15–30% increase, partial workforce turnover | Segment the curve; treat post-restart as a partial reset |
| Supply chain disruption | Rework hours spike due to out-of-sequence work | Separate rework hours, normalize base hours to pre-disruption trend |
| Major engineering change | Affected operations reset to near-first-unit levels | Decompose by operation; reset only changed operations |
| Workforce turnover (> 30%) | 10–20% increase, gradual recovery over 10–20 units | Model as a learning curve reset with a lower T1 than original |
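The last two rows describe a partial learning-curve reset, which can be sketched with the Crawford unit-curve formula hours(n) = T1 · n^b, where b = log2(slope). The 10,000-hour T1, 85% slope, and 15% restart penalty below are assumed numbers for illustration only:

```python
import math

def unit_hours(t1, slope, n):
    """Crawford unit curve: hours for unit n, given first-unit hours t1
    and a learning-curve slope such as 0.85 for an 85% curve."""
    return t1 * n ** math.log2(slope)

t1, slope = 10_000.0, 0.85              # assumed program parameters
pre_disruption_u50 = unit_hours(t1, slope, 50)

# Partial reset after heavy turnover: restart a new curve at unit 50 with
# a T1 above the pre-disruption trend but far below the original T1,
# because the retained workforce keeps some of its learning.
t1_reset = 1.15 * pre_disruption_u50    # assumed 15% restart penalty
post_restart_u10 = unit_hours(t1_reset, slope, 10)
```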

What Makes “Good” HPU Data

Good HPU data has five characteristics. If any of these are missing, the learning curve analysis will be unreliable and the resulting forecasts will carry hidden bias.

📊 Five Attributes of Quality HPU Data: Checklist

| Attribute | Definition | How to Verify |
| --- | --- | --- |
| Unit-specific | Hours are traceable to a single serial number | Check charge number structure; confirm no pooled work orders |
| Complete | All operations for the unit are closed and final | Verify no open work orders, no pending rework, all buy-off complete |
| Consistent | Same scope of work across all units in the dataset | Confirm configuration, charge number definitions, and labor categories are stable |
| Accurate | Hours reflect actual work performed, not planned or estimated | Cross-check with timekeeping system, compare to supervisor logs |
| Timely | Data is current, with no retroactive adjustments pending | Confirm accounting period is closed, no pending cost transfers |

Common Data Quality Problems

After reviewing HPU datasets from dozens of aerospace programs, these are the problems that appear most frequently. Each one biases the learning curve in a specific direction, and the biases do not cancel out — they compound.

| Problem | Symptom | Bias Direction | Fix |
| --- | --- | --- | --- |
| Cross-charging | Hours scatter increases, some units inexplicably high or low | Random — inflates variance, weakens R² | Audit charge practices, reconcile with supervisor records |
| Incomplete units in dataset | Recent units appear to have fewer hours (still in work) | Understates recent hours, overstates learning | Only include fully completed units with all work orders closed |
| Configuration changes not tracked | Step change in hours at a specific unit number | Overstates or understates learning depending on direction | Map configuration changes to unit effectivity, normalize or segment |
| Overhead in touch labor | All units appear higher than expected, flat learning | Understates learning rate (flatter curve) | Strip non-touch hours (meetings, training, general support) |
| Retroactive cost transfers | Historical unit hours change between reporting periods | Unpredictable — corrupts trend analysis | Lock data after accounting close, use snapshot date |

⚠️ The Incomplete Unit Trap

This is the single most common and most damaging data error. If your dataset includes units that are still in production, those units will show fewer hours than they will ultimately accumulate. The learning curve regression will interpret this as faster-than-actual learning, producing an optimistic forecast. Always verify that every unit in your dataset has completed all operations and all work orders are closed.
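The bias can be demonstrated with a small log-log regression. Everything below is synthetic: the closed units lie near an 85% unit curve, and the last unit is still in work:

```python
import math

def fit_unit_curve(data):
    """Least-squares fit of log(hours) = log(T1) + b * log(unit), the
    standard log-log regression for a unit learning curve.
    Returns (T1, slope), where slope = 2**b (e.g. 0.85 for an 85% curve)."""
    xs = [math.log(unit) for unit, _ in data]
    ys = [math.log(hours) for _, hours in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - b * mx), 2 ** b

# Synthetic dataset: (unit, hours, all_work_orders_closed).
raw = [(1, 10_000, True), (2, 8_520, True), (4, 7_190, True),
       (8, 6_140, True), (16, 5_230, True),
       (20, 3_900, False)]   # still in work: hours will keep growing

t1, slope = fit_unit_curve([(u, h) for u, h, closed in raw if closed])
biased_t1, biased_slope = fit_unit_curve([(u, h) for u, h, _ in raw])
# biased_slope < slope: the open unit makes learning look faster than it is.
```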

🎯 The Bottom Line

The quality of your learning curve analysis is limited by the quality of your data. Use touch labor hours, not span or charged hours. Normalize for rate changes before fitting a curve. Identify and treat disruptions explicitly rather than letting them corrupt the regression. Verify all five data quality attributes before starting analysis. The most common error — including incomplete units — produces systematically optimistic forecasts that can undermine proposals and EACs. Next: Unit & Cumulative Average Curves — how to plot, regress, and build confidence intervals on clean data.
