April 20, 2026 · 7 min read · By Lora Neumann

# Resistor Tolerance Explained: What ±1% vs ±5% Actually Means for Your Circuit

Every resistor datasheet lists a tolerance — usually ±1%, ±5%, or ±10%. Most engineers know that ±1% is "more precise" and ±5% is "less precise." But what does that actually mean for your circuit? When does spending extra for tight tolerance make a measurable difference, and when are you just burning money?

Let's break it down with real examples.

What Tolerance Actually Measures

Tolerance is the maximum deviation from the nominal resistance value at the time of manufacture. A 10 kΩ resistor with ±5% tolerance will measure between 9,500 Ω and 10,500 Ω when it leaves the factory.

A few things tolerance is not:

  • It's not a guarantee about drift over temperature or time
  • It's not a statement about noise or stability
  • It's not a quality metric — a cheap ±1% resistor can drift worse than a good ±5% one

Tolerance only describes the initial accuracy at 25°C, right out of the tape.

The Distribution Within Tolerance

Here's something most people don't think about: not every resistor in a ±5% batch measures exactly 5% off. The actual distribution is roughly Gaussian (bell curve), and most parts cluster near the nominal value.

In fact, many ±5% resistor series are manufactured using the same process as ±1% parts. The ones that land within ±1% get binned and sold as ±1%. The rest become ±5% (or ±10%).

This means if you buy ±5% resistors, you'll rarely see one at the actual ±5% edge. Most will be within ±2–3%. But you can't guarantee that, which is why the spec matters for worst-case analysis.
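The binning story can be illustrated with a quick Monte Carlo sketch. The 2% process spread and the exact binning rule here are assumptions for illustration, not any real vendor's process:

```python
import random

def sample_binned_batch(nominal=10_000, sigma_frac=0.02, n=100_000):
    """Simulate a production run: resistances cluster near nominal
    (roughly Gaussian), parts inside +/-1% are binned off and sold
    as +/-1%, and the remainder within +/-5% ships as the 5% series.
    sigma_frac is a hypothetical process spread."""
    batch = []
    for _ in range(n):
        r = random.gauss(nominal, sigma_frac * nominal)
        dev = abs(r - nominal) / nominal
        if 0.01 < dev <= 0.05:  # 1% parts removed; out-of-spec parts scrapped
            batch.append(r)
    return batch

batch = sample_binned_batch()
beyond_4pct = sum(1 for r in batch if abs(r - 10_000) / 10_000 > 0.04) / len(batch)
print(f"parts in the 5% bin deviating more than 4%: {beyond_4pct:.1%}")
```

One side effect worth knowing: if a vendor really does bin this way, a ±5% reel contains no parts near nominal at all, because the middle of the bell curve was sold off as ±1%.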

When Tolerance Matters (And When It Doesn't)

When It Doesn't Matter: Digital Pull-Ups and Pull-Downs

A 10 kΩ pull-up resistor on an I2C bus or a GPIO line? ±5% is perfectly fine. The exact resistance only needs to be in the right ballpark — somewhere between 1 kΩ and 100 kΩ would probably work. The ±500 Ω variation from a ±5% tolerance is irrelevant.

Same goes for:

  • LED current-limiting resistors (the LED itself has wide forward voltage variation)
  • Reset pull-ups
  • General-purpose GPIO pull-ups/downs
  • Any circuit where the resistance value isn't setting a precision voltage or current

When It Matters: Voltage Dividers

A voltage divider is where tolerance starts to bite. The output voltage depends on the ratio of two resistors, and if both drift in opposite directions, the error compounds.

Example: A voltage divider scaling 12V down to 3.3V for an ADC input.

Vout = Vin × R2 / (R1 + R2)

Target: Vout = 3.3V at Vin = 12V
Choose R1 = 8.2 kΩ, R2 = 3.3 kΩ (roughly)

Ideal: Vout = 12 × 3300 / (8200 + 3300) = 3.443V

With ±5% tolerance, worst case:

R1 at +5% = 8610 Ω, R2 at -5% = 3135 Ω
Vout = 12 × 3135 / (8610 + 3135) = 3.203V

That's a -7.0% error from ideal.

R1 at -5% = 7790 Ω, R2 at +5% = 3465 Ω:
Vout = 12 × 3465 / (7790 + 3465) = 3.694V

That's +7.3% error from ideal.

That 0.49V swing on a roughly 3.3V signal is huge. If your ADC is using this divider to measure battery voltage, your readings are off by several hundred millivolts.

With ±1% resistors:

R1 at +1% = 8282 Ω, R2 at -1% = 3267 Ω
Vout = 12 × 3267 / (8282 + 3267) = 3.395V

Only -1.4% error. Much better.

For a 12-bit ADC with a 3.3V reference (about 0.8 mV per LSB), the ±5% divider error translates to roughly 300 LSB of error. The ±1% version gives about 60 LSB. Night and day.
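The divider arithmetic above is easy to automate. A minimal sketch that sweeps both resistors to their tolerance corners:

```python
from itertools import product

def divider_vout(vin, r1, r2):
    """Standard resistive divider output voltage."""
    return vin * r2 / (r1 + r2)

def worst_case_divider(vin, r1_nom, r2_nom, tol):
    """Evaluate the divider at all four tolerance corners
    (each resistor at +tol or -tol) and return the extremes."""
    outs = [divider_vout(vin, r1_nom * (1 + s1 * tol), r2_nom * (1 + s2 * tol))
            for s1, s2 in product((-1, 1), repeat=2)]
    return min(outs), max(outs)

ideal = divider_vout(12, 8200, 3300)
lo, hi = worst_case_divider(12, 8200, 3300, 0.05)
print(f"ideal {ideal:.3f} V, worst-case range {lo:.3f}-{hi:.3f} V")
```

Swap the tolerance argument to 0.01 to see how much the window tightens with ±1% parts.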

When It Really Matters: Current Sensing

Current-sense resistors are the most tolerance-sensitive application in typical circuits. If you're measuring current through a 0.1 Ω sense resistor, a ±5% tolerance means your measurement could be off by ±5 mA at 100 mA — or ±50 mA at 1 A.

For battery fuel gauges, motor current limiting, or power management, that kind of error is unacceptable. Use ±1% or better (±0.1% and ±0.5% are common for precision current sensing).

The math is simple and brutal:

Imeasured = Vsense / Rsense

If Rsense = 0.1 Ω ±5%:
  At 1A actual current, Vsense = 100 mV
  Rsense could be 0.095 to 0.105 Ω
  Imeasured = 100 mV / 0.095 = 1.053 A (or 952 mA)
  
That's up to +5.3% (or -4.8%) error in your current reading — from the resistor alone.
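In firmware terms, the same math looks like this (hypothetical helper, same numbers as above):

```python
def reading_vs_truth(r_nom, tol, v_sense=0.1):
    """Firmware divides the measured sense voltage by the *nominal*
    resistance; the true current is v_sense over the actual
    resistance, which can sit anywhere in the tolerance band."""
    i_reported = v_sense / r_nom
    i_true_low = v_sense / (r_nom * (1 + tol))   # resistor landed at +tol
    i_true_high = v_sense / (r_nom * (1 - tol))  # resistor landed at -tol
    return i_reported, i_true_low, i_true_high

reported, lo, hi = reading_vs_truth(0.1, 0.05)
print(f"reported {reported:.3f} A, truth between {lo:.3f} A and {hi:.3f} A")
```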

When It's Critical: Gain-Setting Resistors

Op-amp gain is set by a resistor ratio. The classic non-inverting amplifier:

Gain = 1 + (Rf / Rin)

If you need a precise gain (say for a sensor signal chain), both Rf and Rin need tight tolerance. With ±5% parts, worst-case gain error is roughly ±10%. With ±1%, it's about ±2%.
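The same corner-sweep idea works for gain. The 100 kΩ / 10 kΩ pair below (a gain of 11) is an arbitrary example, not a value from any particular design:

```python
from itertools import product

def noninv_gain(rf, rin):
    """Classic non-inverting amplifier gain: 1 + Rf/Rin."""
    return 1 + rf / rin

def worst_case_gain(rf_nom, rin_nom, tol):
    """Evaluate the gain at all four tolerance corners."""
    gains = [noninv_gain(rf_nom * (1 + s1 * tol), rin_nom * (1 + s2 * tol))
             for s1, s2 in product((-1, 1), repeat=2)]
    return min(gains), max(gains)

nominal = noninv_gain(100_000, 10_000)             # gain of 11
lo5, hi5 = worst_case_gain(100_000, 10_000, 0.05)  # about -9% to +10%
lo1, hi1 = worst_case_gain(100_000, 10_000, 0.01)  # about -2% to +2%
print(f"nominal {nominal:.1f}, 5%: {lo5:.2f}-{hi5:.2f}, 1%: {lo1:.2f}-{hi1:.2f}")
```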

For instrumentation and measurement circuits, ±0.1% resistors are common. They cost more ($0.10–0.50 vs $0.01 for ±5% in small quantities) but the precision is worth it.

Tolerance vs. Temperature Coefficient

Here's the thing that catches people: tolerance only covers initial accuracy. Over temperature, resistors drift. The temperature coefficient of resistance (TCR), usually specified in ppm/°C, tells you how much.

| Resistor Type | Typical TCR | Tolerance Range |
|---|---|---|
| Thick film chip | ±100 to ±200 ppm/°C | ±1% to ±5% |
| Thin film chip | ±25 to ±50 ppm/°C | ±0.1% to ±1% |
| Metal film (through-hole) | ±50 ppm/°C | ±1% |
| Wirewound | ±20 to ±50 ppm/°C | ±1% to ±5% |
| Bulk metal foil (Vishay Z-foil) | ±0.2 ppm/°C | ±0.005% |

A ±1% thin-film resistor with ±50 ppm/°C TCR, operating from -40°C to +85°C:

ΔT = 85 - (-40) = 125°C
Drift = 50 ppm/°C × 125°C = 6250 ppm = 0.625%

Total worst case: 1% + 0.625% = 1.625% from nominal

A ±5% thick-film resistor with ±200 ppm/°C over the same range:

Drift = 200 ppm/°C × 125°C = 25000 ppm = 2.5%
Total worst case: 5% + 2.5% = 7.5% from nominal

For a 10 kΩ resistor, that's the difference between being off by 163 Ω vs 750 Ω. If that resistor sets a time constant or bias point, the performance difference is real.
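Combining the two error terms, using the same conservative full-span ΔT as the worked numbers above:

```python
def total_worst_case(tol, tcr_ppm, t_min=-40.0, t_max=85.0):
    """Initial tolerance plus TCR drift over the operating range,
    using the full-span delta-T convention from the examples above
    (a conservative bound; TCR specs are referenced to 25 C)."""
    drift = tcr_ppm * (t_max - t_min) * 1e-6
    return tol + drift

thin = total_worst_case(0.01, 50)    # +/-1% thin film, +/-50 ppm/C
thick = total_worst_case(0.05, 200)  # +/-5% thick film, +/-200 ppm/C
print(f"thin film: {thin:.3%}, thick film: {thick:.1%}")
print(f"on 10 kOhm: {10_000 * thin:.0f} Ohm vs {10_000 * thick:.0f} Ohm")
```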

The Cost Question

Here's the honest truth about resistor pricing (approximate, 1K quantity, 0603 chip resistors):

| Tolerance | Typical Price | Multiplier vs ±5% |
|---|---|---|
| ±5% | $0.002–0.005 | 1× |
| ±1% | $0.003–0.008 | 1.5–2× |
| ±0.5% | $0.01–0.03 | 5–10× |
| ±0.1% | $0.03–0.10 | 15–30× |
| ±0.01% | $0.50–2.00 | 200–500× |

The jump from ±5% to ±1% is cheap — often pennies per board. The jump from ±1% to ±0.1% is where it starts to hurt. And ±0.01% is for calibration labs and metrology, not consumer electronics.

Practical approach: Default to ±1% for everything. The cost difference is negligible, and it gives you flexibility to use the same resistor in precision or non-precision circuits. Reserve ±5% for obvious non-critical applications (pull-ups, LEDs) where you're buying in massive volume and every cent matters.

Worst-Case Analysis: How to Do It Right

For any circuit where tolerance affects performance, do a proper worst-case analysis:

  1. Identify the resistors that set the critical parameter (gain, voltage, current, timing)
  2. Calculate the parameter with each resistor at its tolerance extreme
  3. Use the combination that gives the worst result (this isn't always all at max or all at min — think about the math)
  4. Add temperature drift on top for the operating range
  5. Check if the total error is acceptable

For voltage dividers and gain stages, the worst case is usually when resistors drift in opposite directions (one high, one low). For summing circuits, it depends on the specific topology.
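The five steps above can be sketched as a generic corner sweep (hypothetical helper). For the handful of resistors that set a critical parameter, exhaustively checking every ± combination is cheap, and for parameters monotonic in each resistor it is guaranteed to catch the true extremes:

```python
from itertools import product

def worst_case(param_fn, nominals, tol):
    """Evaluate param_fn with every resistor pushed to +tol or -tol
    (2**n corners) and return the extreme values of the parameter."""
    results = [param_fn(*[nom * (1 + s * tol) for nom, s in zip(nominals, signs)])
               for signs in product((-1, 1), repeat=len(nominals))]
    return min(results), max(results)

# Example: the 12 V divider from earlier, with +/-1% parts
lo, hi = worst_case(lambda r1, r2: 12 * r2 / (r1 + r2), [8200, 3300], 0.01)
print(f"Vout worst-case range: {lo:.3f}-{hi:.3f} V")
```

For a full-temperature-range bound, fold the TCR drift from the previous section into `tol` before running the sweep.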

Not sure what resistance you're looking at? The Resistor Color Code Calculator decodes 4-band, 5-band, and 6-band resistors instantly — including tolerance bands.