Descriptive Statistics · Standardization · Normal Distribution — 20 min read — May 1, 2026
By: Statistics Fundamentals Team
Reviewed by: Minsa A (Senior Statistics Editor)

Z-Scores Explained: Definition, Formula, and Step-by-Step Examples

A student scores 85 on one exam and 72 on another. Which performance was actually better? The raw numbers say 85 wins. But if the first test had an average of 90 and the second had an average of 60, the 72 is the stronger result — by a lot. That comparison is exactly what a z-score makes possible.

This guide covers the z-score formula in full, walks through worked examples with realistic numbers, explains how to read a z-table, and shows when to use a z-score versus a t-score.

What You'll Learn
  • ✓ The exact definition of a z-score and the formula in two forms (population and sample)
  • ✓ Three worked calculation examples — including a negative z-score
  • ✓ How to read a z-table for left-tail, right-tail, and two-tailed probabilities
  • ✓ The 68-95-99.7 empirical rule and how to convert z-scores to percentiles
  • ✓ Five real-world applications from outlier detection to machine learning
  • ✓ Python, R, and Excel code with runnable examples

What Is a Z-Score? (Simple Definition)

Definition — Standard Score
A z-score (also called a standard score or z-value) is a statistical measure that tells you how many standard deviations a data point lies above or below the mean of a distribution. A z-score of 0 means the value equals the mean; positive z-scores sit above the mean; negative z-scores sit below it.
z = (x − μ) / σ

Z-scores solve a practical problem: raw numbers from different distributions cannot be compared directly. A score of 85 in one class and 72 in another are on different scales. Converting both to z-scores puts them on the same universal scale — the standard normal distribution, which always has a mean of 0 and a standard deviation of 1. That common scale is what makes comparison possible.

The process of converting raw values to z-scores is called standardization or normalizing data. It is one of the most frequently used operations in statistics, data science, and machine learning. The descriptive statistics section at Statistics Fundamentals covers the mean and standard deviation — the two ingredients every z-score needs.
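As a sketch of what standardization does, the snippet below converts a small dataset to z-scores using only those two ingredients. The data values are illustrative, not taken from this guide.

```python
# Standardizing an illustrative dataset (hypothetical values)
data = [72, 78, 85, 62, 90, 55, 88]

n = len(data)
mean = sum(data) / n
# Population standard deviation (divide by n)
sigma = (sum((v - mean) ** 2 for v in data) / n) ** 0.5

z_scores = [(v - mean) / sigma for v in data]

# After standardization the scores have mean 0 and SD 1
z_mean = sum(z_scores) / n
z_sd = (sum(z ** 2 for z in z_scores) / n) ** 0.5
print(round(abs(z_mean), 6), round(z_sd, 6))  # 0.0 1.0
```

Whatever the original units, the standardized values always land on the same mean-0, SD-1 scale.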

⚡ Quick Reference — Z-Score Key Facts
  • Formula: z = (x − μ) / σ, where x is the data point, μ is the population mean, σ is the standard deviation
  • Interpretation: z = 0 means exactly average; z = 1.0 means 1 SD above the mean (84th percentile)
  • Outlier rule: Values with |z| > 3 are considered statistical outliers (<0.3% of normally distributed data)
  • 68-95-99.7 Rule: ~68% of data falls within z = ±1; ~95% within z = ±2; ~99.7% within z = ±3
  • When to use z vs t: Use the z-score when the population SD (σ) is known or n > 30; use the t-score otherwise

The Z-Score Formula (Population vs. Sample)

There are two versions of the z-score formula, depending on whether you are working with an individual data point or a sample mean. Both measure distance from the mean, but in different units.

Population Z-Score Formula: z = (x − μ) / σ

Population Z-Score — Individual Data Point
z = (x − μ) / σ
Use this when you know the full population mean and standard deviation
  • x = the individual raw score (data point)
  • μ = population mean (mu)
  • σ = population standard deviation (sigma)
  • z = resulting standard score

This formula applies whenever you know the true population mean and standard deviation — for example, IQ scores (μ = 100, σ = 15), or standardized test scales where the testing body publishes the full population parameters.

Sample Z-Score Formula: z = (x̄ − μ) / (σ/√n)

Sample Mean Z-Score — Sampling Distribution
z = (x̄ − μ) / (σ / √n)
Use this when testing how unusual a sample mean is relative to the population
  • x̄ = sample mean
  • μ = population mean
  • σ = population standard deviation
  • n = sample size
  • σ/√n = standard error (SE)

The denominator σ/√n is the standard error — the standard deviation of the sampling distribution of sample means. As n increases, the standard error shrinks, meaning larger samples produce more precise estimates. This formula connects directly to sampling distributions and forms the foundation of the z-test in hypothesis testing. The theoretical justification for why this works comes from the Central Limit Theorem.
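A quick numeric sketch of this formula, with hypothetical numbers (a sample of 36 IQ-style scores against μ = 100, σ = 15):

```python
import math

# Hypothetical example: sample mean 104 from n = 36 observations,
# population mu = 100 and known sigma = 15
x_bar, mu, sigma, n = 104, 100, 15, 36

se = sigma / math.sqrt(n)     # standard error: 15 / 6 = 2.5
z = (x_bar - mu) / se         # (104 - 100) / 2.5 = 1.6
print(f"SE = {se}, z = {z}")  # SE = 2.5, z = 1.6
```

Note how the same 4-point difference from μ would give z ≈ 0.27 for a single observation but z = 1.6 for a sample mean of 36, because the standard error shrinks with √n.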

How to Calculate a Z-Score: Step-by-Step (3 Examples)

How to Calculate a Z-Score in 3 Steps

Step 1: Identify your data point (x), the population mean (μ), and the standard deviation (σ).
Step 2: Subtract the mean from the data point: x − μ.
Step 3: Divide the result by the standard deviation: (x − μ) / σ. The result is your z-score.

Example 1 — Exam Score (Score Above the Mean)

Worked Example 1 — Positive Z-Score

A student scores 85 on a statistics exam. The class mean is μ = 70 and the standard deviation is σ = 10. What is the z-score?

1. Identify the values: x = 85, μ = 70, σ = 10
2. Subtract the mean: x − μ = 85 − 70 = 15
3. Divide by the standard deviation: z = 15 / 10 = 1.5

✓ z = 1.5. This student scored 1.5 standard deviations above the class mean, placing them at approximately the 93rd percentile. About 93 out of 100 students scored lower.
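The same calculation can be checked in a few lines of Python. The percentile comes from the standard normal CDF via the stdlib `statistics.NormalDist`, which plays the role of a z-table here:

```python
from statistics import NormalDist

x, mu, sigma = 85, 70, 10
z = (x - mu) / sigma
print(z)  # 1.5

# Percentile via the standard normal CDF (a code stand-in for a z-table)
percentile = NormalDist().cdf(z) * 100
print(round(percentile, 2))  # 93.32
```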

Example 2 — Height Comparison (Cross-Distribution Z-Scores)

Z-scores shine when comparing values from distributions with different means and spreads. Suppose you want to know whether a 6'1" man or a 5'8" woman is taller relative to their respective populations. The raw heights cannot be directly compared — but z-scores can.

Worked Example 2 — Cross-Group Comparison

Adult male heights: μ = 5'9" (69 in), σ = 3 in. Adult female heights: μ = 5'4" (64 in), σ = 2.5 in. Who stands farther from their group average — a 6'1" man or a 5'8" woman?

1. Man: z = (73 − 69) / 3 = 4 / 3 ≈ 1.33
2. Woman: z = (68 − 64) / 2.5 = 4 / 2.5 = 1.60

✓ The woman's z-score (1.60) is higher than the man's (1.33). Despite being shorter in raw terms, she is taller relative to women than the man is relative to men. This is the key insight z-scores provide: a common yardstick across different scales.

Example 3 — Negative Z-Score (Score Below the Mean)

Worked Example 3 — Negative Z-Score

A factory produces bolts with a target diameter of μ = 10mm and σ = 0.5mm. A bolt measures 9.2mm. What is its z-score, and what does a negative result mean?

1. Identify the values: x = 9.2, μ = 10, σ = 0.5
2. Subtract the mean: 9.2 − 10 = −0.8
3. Divide by σ: z = −0.8 / 0.5 = −1.6

✓ z = −1.6. The bolt is 1.6 standard deviations below the target diameter. A negative z-score simply means the value falls below the mean — it is not inherently problematic until you define what range is acceptable. In quality control, ±3 is a common tolerance boundary.
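The negative case runs through the formula exactly like the positive one; the CDF of a negative z-score also tells you what fraction of output would measure at or below this bolt, assuming diameters are normally distributed:

```python
from statistics import NormalDist

x, mu, sigma = 9.2, 10, 0.5
z = (x - mu) / sigma
print(round(z, 2))  # -1.6

# Fraction of (normally distributed) bolts at or below 9.2mm
print(round(NormalDist().cdf(z), 3))  # 0.055
```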


How to Use the Z-Score Table (Standard Normal Table)

A z-table (also called a standard normal table) converts a z-score into a probability — specifically, the proportion of the distribution that falls at or below that z-score. The table describes the standard normal distribution, which has μ = 0 and σ = 1.

Reading a Z-Table: Left-Tail, Right-Tail, and Two-Tailed

Most standard z-tables give left-tail probabilities — the area under the normal curve to the left of the z-score, written P(Z ≤ z). To find right-tail or two-tailed probabilities, you apply simple arithmetic:

  • Left-tail (below z): Read directly from the table: P(Z ≤ z)
  • Right-tail (above z): 1 − P(Z ≤ z)
  • Two-tailed (outside ±z): 2 × P(Z > |z|) = 2 × [1 − P(Z ≤ |z|)]
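These three conversions can be sketched in Python using the stdlib standard normal CDF in place of a printed z-table, here with z = 1.25:

```python
from statistics import NormalDist

z = 1.25
left = NormalDist().cdf(z)                       # P(Z <= z), what the table prints
right = 1 - left                                 # P(Z > z)
two_tailed = 2 * (1 - NormalDist().cdf(abs(z)))  # P(|Z| > z)

print(round(left, 4))        # 0.8944
print(round(right, 4))       # 0.1056
print(round(two_tailed, 4))  # 0.2113
```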

Worked Example: Finding P(Z ≤ 1.25) = 0.8944

Z-Table Lookup — Step by Step

Find the probability that a randomly selected value from a standard normal distribution falls at or below z = 1.25.

1. Identify the row: The first two digits of 1.25 are 1.2 → find the row labeled 1.2
2. Identify the column: The hundredths digit of 1.25 is 5 → find the column labeled .05
3. Read the intersection: Row 1.2, Column .05 → 0.8944

✓ P(Z ≤ 1.25) = 0.8944. Approximately 89.44% of the data in a standard normal distribution falls at or below z = 1.25. The right-tail probability is 1 − 0.8944 = 0.1056, or about 10.56%.

The table below covers z-scores from 0.00 to 2.09, with rows in 0.1 steps and columns supplying the hundredths digit. For a downloadable full z-table, see the z-table reference page.

z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0  .5000  .5040  .5080  .5120  .5160  .5199  .5239  .5279  .5319  .5359
0.1  .5398  .5438  .5478  .5517  .5557  .5596  .5636  .5675  .5714  .5753
0.2  .5793  .5832  .5871  .5910  .5948  .5987  .6026  .6064  .6103  .6141
0.3  .6179  .6217  .6255  .6293  .6331  .6368  .6406  .6443  .6480  .6517
0.4  .6554  .6591  .6628  .6664  .6700  .6736  .6772  .6808  .6844  .6879
0.5  .6915  .6950  .6985  .7019  .7054  .7088  .7123  .7157  .7190  .7224
0.6  .7257  .7291  .7324  .7357  .7389  .7422  .7454  .7486  .7517  .7549
0.7  .7580  .7611  .7642  .7673  .7704  .7734  .7764  .7794  .7823  .7852
0.8  .7881  .7910  .7939  .7967  .7995  .8023  .8051  .8078  .8106  .8133
0.9  .8159  .8186  .8212  .8238  .8264  .8289  .8315  .8340  .8365  .8389
1.0  .8413  .8438  .8461  .8485  .8508  .8531  .8554  .8577  .8599  .8621
1.1  .8643  .8665  .8686  .8708  .8729  .8749  .8770  .8790  .8810  .8830
1.2  .8849  .8869  .8888  .8907  .8925  .8944  .8962  .8980  .8997  .9015
1.3  .9032  .9049  .9066  .9082  .9099  .9115  .9131  .9147  .9162  .9177
1.4  .9192  .9207  .9222  .9236  .9251  .9265  .9279  .9292  .9306  .9319
1.5  .9332  .9345  .9357  .9370  .9382  .9394  .9406  .9418  .9429  .9441
1.6  .9452  .9463  .9474  .9484  .9495  .9505  .9515  .9525  .9535  .9545
1.7  .9554  .9564  .9573  .9582  .9591  .9599  .9608  .9616  .9625  .9633
1.8  .9641  .9649  .9656  .9664  .9671  .9678  .9686  .9693  .9699  .9706
1.9  .9713  .9719  .9726  .9732  .9738  .9744  .9750  .9756  .9761  .9767
2.0  .9772  .9778  .9783  .9788  .9793  .9798  .9803  .9808  .9812  .9817

In the table above, the intersection of row 1.2 and column .05 gives P(Z ≤ 1.25) = 0.8944. For the complete table including negative z-scores, visit the Z-Table reference page.

Interpreting Z-Scores: The 68-95-99.7 Empirical Rule

For data that follow a normal distribution, z-scores map to predictable probability bands. These bands are captured in the empirical rule (also called the 68-95-99.7 rule), which specifies exactly how much data falls within 1, 2, and 3 standard deviations of the mean.

Normal Distribution — Z-Score Probability Bands

[Figure: standard normal curve centered at μ (z = 0), with shaded bands at ±1σ and ±2σ marking the 68% and 95% regions]

The darker central band covers ±1σ (68%). The full shaded area to ±2σ covers 95% of the distribution.

What Z-Score Ranges Mean in Practice

Z-Score Range   % of Data Included   Plain Interpretation
z ∈ [−1, +1]    ~68%                 Data within 1 SD of the mean — the typical middle bulk
z ∈ [−2, +2]    ~95%                 The vast majority of observations — the standard "normal range" in many fields
z ∈ [−3, +3]    ~99.7%               Nearly all data — values outside this range are statistical outliers
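The three empirical-rule percentages can be recovered directly from the standard normal CDF (here via the stdlib `statistics.NormalDist` rather than a table):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mu = 0, sigma = 1
for k in (1, 2, 3):
    coverage = nd.cdf(k) - nd.cdf(-k)  # P(-k <= Z <= +k)
    print(f"±{k} SD: {coverage * 100:.1f}%")
# ±1 SD: 68.3%
# ±2 SD: 95.4%
# ±3 SD: 99.7%
```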

Converting Z-Scores to Percentiles

A z-score maps directly to a percentile through the z-table. Here are the most useful conversions to memorize:

  • z = 0.0 → 50th percentile
  • z = 1.0 → 84th percentile
  • z = 1.5 → 93rd percentile
  • z = 2.0 → 97.7th percentile
  • z = 3.0 → 99.9th percentile

For negative z-scores, use the symmetry of the normal distribution: the percentile for z = −1.0 is 100% − 84.13% = 15.87%. You do not need a separate table for negative values — only the complement calculation.
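The symmetry argument is easy to check numerically; the percentiles for z = +1.0 and z = −1.0 always sum to 100%:

```python
from statistics import NormalDist

nd = NormalDist()
p_pos = nd.cdf(1.0)   # percentile for z = +1.0
p_neg = nd.cdf(-1.0)  # percentile for z = -1.0

print(round(p_pos * 100, 2))  # 84.13
print(round(p_neg * 100, 2))  # 15.87
# The two percentiles are complements of each other
print(round(p_pos + p_neg, 6))  # 1.0
```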

5 Real-World Applications of Z-Scores

Z-scores are not a classroom abstraction. They appear in quality control floors, hospital growth charts, trading algorithms, and machine learning pipelines. Here are five concrete use cases.

🔍

1. Outlier Detection

Any observation with |z| > 3 falls more than 3 standard deviations from the mean — a threshold that flags fewer than 0.3% of normally distributed data as potential outliers.

🧪

2. Hypothesis Testing

The z-test computes a z-score for a sample mean and compares it to critical values (±1.645 for 90% CI, ±1.96 for 95%). See the hypothesis testing guide.

🤖

3. Feature Scaling in ML

Standardization — converting features to z-scores — is a preprocessing step for algorithms like k-NN, SVM, and logistic regression that are sensitive to feature magnitude differences.

📊

4. SAT vs. ACT Comparison

A 1450 SAT score and a 32 ACT score are on incompatible scales. Converting each to a z-score using national norms produces a direct, fair comparison of performance.

🏥

5. Clinical Growth Charts

Pediatric growth charts use z-scores (also called SD scores) to classify children's height and weight relative to reference populations. A z-score below −2 signals undernutrition in WHO standards.

Outlier Detection: The Beyond ±3 Rule

In a normal distribution, only 0.27% of data falls beyond ±3 standard deviations. That rarity makes |z| > 3 a practical outlier threshold in many fields. In quality control, this connects to the Six Sigma framework, which targets defect rates at ±6 standard deviations — a proportion so small it is measured in parts per million. For data cleaning in Python or R, the ±3 rule is a first-pass filter before more sophisticated outlier tests.
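A minimal first-pass filter, sketched here with the bolt example's known process parameters (μ = 10mm, σ = 0.5mm) and hypothetical measurements. Note that with known population parameters this works even for small batches; estimating μ and σ from a small sample that contains the outlier can mask it.

```python
mu, sigma = 10.0, 0.5  # known process parameters (bolt example)

# Hypothetical batch of diameter measurements
measurements = [10.1, 9.9, 10.0, 10.4, 9.8, 10.2, 8.1, 10.0]

# Flag anything more than 3 SDs from the target
flagged = [x for x in measurements if abs((x - mu) / sigma) > 3]
print(flagged)  # [8.1]
```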

Hypothesis Testing: Z-Tests and P-Values

When a research question asks whether a sample mean is significantly different from a known population mean, the z-test converts the sample result into a z-score and compares it to a critical value. For a two-tailed test at α = 0.05, the critical value is ±1.96 — if the computed z falls outside that range, the result is statistically significant and you reject the null hypothesis. The p-value is then read from the z-table as the tail probability beyond the observed z. Full treatment of this procedure is in the hypothesis testing guide and the statistics and probability section.
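The whole procedure fits in a few lines. The numbers below are hypothetical (an IQ-style test of H₀: μ = 100 with known σ = 15):

```python
from statistics import NormalDist
import math

# Hypothetical two-tailed z-test: H0: mu = 100 vs H1: mu != 100
mu0, sigma, n, x_bar = 100, 15, 36, 106

z = (x_bar - mu0) / (sigma / math.sqrt(n))    # (106 - 100) / 2.5 = 2.4
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed tail probability

print(z)                  # 2.4
print(round(p_value, 4))  # 0.0164
print(abs(z) > 1.96)      # True -> reject H0 at alpha = 0.05
```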

Z-Score vs. T-Score: Key Differences Explained

The z-score and t-score (t-statistic) both measure how far a value falls from a mean, but they apply in different situations. The choice between them comes down to what you know about the population and how many observations you have collected.

Feature                Z-Score                       T-Score (T-Statistic)
Formula                z = (x − μ) / σ               t = (x̄ − μ) / (s / √n)
σ known?               Yes — uses population σ       No — uses sample s instead
Sample size            Reliable for n > 30           Necessary when n ≤ 30
Distribution           Standard normal (μ=0, σ=1)    t-distribution with df = n−1
Tail behavior          Thinner tails                 Heavier tails — wider CIs for the same confidence
When to use            σ known, or n > 30            σ unknown and n ≤ 30
Critical value (95%)   ±1.96 (fixed)                 Varies by df; approaches ±1.96 as n → ∞
ℹ️
Decision Rule

Use the z-score when the population standard deviation σ is known, or when n > 30 (sample is large enough that s is a reliable substitute for σ). Use the t-score — and the t-distribution table — when σ is unknown and n ≤ 30. As n grows large, the t-distribution converges to the normal distribution, so the distinction disappears for large samples.
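The fixed z critical values in the table can be reproduced from the inverse standard normal CDF; t critical values, by contrast, depend on df and need a t-distribution function such as scipy's `scipy.stats.t.ppf`, not shown here:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal
for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    z_crit = nd.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    print(f"{conf:.0%}: ±{z_crit:.3f}")
# 90%: ±1.645
# 95%: ±1.960
# 99%: ±2.576
```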

Calculating Z-Scores in Python, R, and Excel

All three tools support z-score calculation directly, either through built-in functions or with a one-line formula. The examples below are runnable.

Python: scipy.stats.zscore() and Manual Calculation

# Method 1: Manual calculation — most transparent
x = 85
mu = 70
sigma = 10
z = (x - mu) / sigma
print(f"z-score: {z}")  # z-score: 1.5

# Method 2: scipy for an entire array
from scipy import stats
import numpy as np

data = np.array([72, 78, 85, 62, 90, 55, 88])
z_scores = stats.zscore(data)
print(z_scores)  # array of standardized scores

# Method 3: Find the percentile from a z-score
percentile = stats.norm.cdf(1.5) * 100
print(f"Percentile: {percentile:.2f}%")  # Percentile: 93.32%

Excel: STANDARDIZE() Function Walkthrough

Excel's STANDARDIZE() function takes three arguments: the raw value, the mean, and the standard deviation. To compute a z-score for a value in cell A2 with mean in B2 and SD in C2:

// Excel formula — enter in any cell
=STANDARDIZE(A2, B2, C2)

// To get the percentile directly:
=NORM.S.DIST(STANDARDIZE(A2, B2, C2), TRUE)

// Example: x=85, mean=70, SD=10 → returns 1.5
=STANDARDIZE(85, 70, 10)   // Result: 1.5

R: scale() Function and Manual Method

# Manual calculation
x <- 85; mu <- 70; sigma <- 10
z <- (x - mu) / sigma
cat("z-score:", z, "\n")  # z-score: 1.5

# scale() standardizes entire vectors
data <- c(72, 78, 85, 62, 90, 55, 88)
z_scores <- scale(data)  # uses sample mean and SD
print(z_scores)

# Get the percentile from a z-score
pnorm(1.5) * 100  # returns 93.32%

5 Common Z-Score Mistakes (And How to Avoid Them)

1. Applying z-scores to heavily skewed or bimodal data and treating them as percentiles. Fix: the 68-95-99.7 rule only holds for normal distributions; use Chebyshev's inequality for non-normal data, or verify normality first.
2. Using the population formula (σ) when only the sample SD (s) is available. Fix: if σ is unknown, use the t-statistic instead of the z-score, especially for n ≤ 30.
3. Reading a left-tail z-table for a right-tail probability without subtracting from 1. Fix: P(Z > z) = 1 − P(Z ≤ z); always confirm which tail your table gives before reading a probability.
4. Confusing a negative z-score with a "bad" or erroneous result. Fix: a negative z-score simply means the value is below the mean; in some contexts (e.g., medical test error rates), lower is better.
5. Mixing up the Altman Z-score (finance) with the statistical z-score. Fix: the Altman Z-score (Z = 1.2T₁ + 1.4T₂ + 3.3T₃ + 0.6T₄ + T₅) is a bankruptcy-prediction model, unrelated to the statistical standard score.


Z-Score Quick Reference Cheat Sheet

The table below summarizes every key term, formula, and boundary condition covered in this guide, and serves as a concise study reference.

Term / Entity                  Formula / Value                            When to Use                                    Plain Interpretation
Z-Score (Population)           z = (x − μ) / σ                            σ known; individual data point                 Standard deviations from the population mean
Z-Score (Sample Mean)          z = (x̄ − μ) / (σ/√n)                       Testing how unusual a sample mean is           Standard errors the sample mean is from μ
Standard Normal Distribution   μ = 0, σ = 1                               Always — z-scores map onto this distribution   Universal scale all z-scores share
Empirical Rule                 ±1→68% | ±2→95% | ±3→99.7%                 Normal distributions only                      Proportion of data within each SD band
Outlier Threshold              |z| > 3                                    Flagging unusual values                        Less than 0.3% of normal data falls here
Critical Z-Values (CI)         90%: ±1.645 | 95%: ±1.960 | 99%: ±2.576    Confidence intervals and hypothesis tests      Z-score thresholds for significance
Percentile Conversion          P(Z ≤ z) from z-table                      Comparing an individual to a population        z=1.0 → 84th percentile; z=2.0 → 97.7th
T-Statistic (contrast)         t = (x̄ − μ) / (s/√n)                       σ unknown; n ≤ 30                              Uses sample SD — heavier tails than z
Chebyshev (non-normal)         At least 1 − 1/k² within ±k SDs            Any distribution regardless of shape           ≥75% within ±2; ≥89% within ±3
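The gap between the Chebyshev guarantee and the normal-distribution figures is worth seeing side by side; a short sketch:

```python
from statistics import NormalDist

nd = NormalDist()
for k in (2, 3):
    chebyshev = 1 - 1 / k**2         # lower bound for ANY distribution
    normal = nd.cdf(k) - nd.cdf(-k)  # exact coverage for normal data
    print(f"±{k} SD: Chebyshev >= {chebyshev:.0%}, normal = {normal:.1%}")
# ±2 SD: Chebyshev >= 75%, normal = 95.4%
# ±3 SD: Chebyshev >= 89%, normal = 99.7%
```

Chebyshev's bound is loose but distribution-free, which is exactly why it is the fallback when normality cannot be assumed.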
