BY: Statistics Fundamentals Team
Reviewed By: Minsa A (Senior Statistics Editor)

Normal Distribution Calculator

Calculate probabilities, Z-scores, and bell curve areas for any normal distribution. Enter your mean, standard deviation, and X value — get instant left-tail, right-tail, or two-tailed results with step-by-step solutions.


CDF: P(X ≤ x) = Φ((x − μ) / σ)
Z-Score: Z = (x − μ) / σ

Enter your distribution parameters and X value. Results update automatically as you type.

Inverse Normal formula: x = μ + Φ⁻¹(p) · σ, where Φ⁻¹(p) is the inverse standard normal CDF.

Know the probability — find the corresponding X value. Enter a cumulative left-tail probability to get the cutoff score or percentile boundary.


What Is a Normal Distribution?

A normal distribution is a continuous, symmetric, bell-shaped probability distribution fully defined by two parameters: mean (μ) and standard deviation (σ). Data clusters symmetrically around the mean, with values tapering off equally on both sides. The bell curve shape appears because moderate values near the mean are far more probable than extreme values far from it.

The distribution appears across dozens of real-world contexts: IQ scores, human heights, measurement errors in scientific instruments, financial returns over short time intervals, and residuals from regression models. When Carl Friedrich Gauss described this distribution to model astronomical measurement errors in the early 1800s, he established what statisticians now call the most important distribution in applied statistics. According to the NIST Engineering Statistics Handbook, the normal distribution is central to virtually all parametric inference methods because of its mathematical tractability and the Central Limit Theorem.

What Makes the Normal Distribution Bell-Shaped?

The bell shape comes from the exponential term in the PDF: e raised to the power of negative (x−μ)²/2σ². As x moves away from μ in either direction, this squared-distance term grows, making the exponent more negative and shrinking f(x) toward zero. The curve reaches its peak at x = μ and is perfectly symmetric around that point. The total area under the entire curve integrates to exactly 1, satisfying the axiom that all probabilities must sum to 100%.
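The peak-at-the-mean and symmetry claims above are easy to verify numerically. Here is a minimal sketch using only Python's standard library (the helper name normal_pdf is ours, not from any library):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x: the height of the bell curve."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The curve peaks at x = mu: f(0) = 1/sqrt(2*pi) for the standard normal
print(round(normal_pdf(0.0), 4))            # 0.3989

# Symmetry: equal heights at equal distances either side of the mean
print(normal_pdf(1.5) == normal_pdf(-1.5))  # True
```

Because the squared term (x − μ)² is identical for points equidistant from μ, the symmetry check holds exactly, not just approximately.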

The 68–95–99.7 Rule (Empirical Rule)

The empirical rule states that in any normal distribution, fixed percentages of data always fall within specific multiples of the standard deviation from the mean. This predictability is what makes the normal distribution so valuable for quality control, grading curves, clinical reference ranges, and process monitoring.

  • 68.27% of values fall within μ ± 1σ (one standard deviation)
  • 95.45% fall within μ ± 2σ (two standard deviations)
  • 99.73% fall within μ ± 3σ (three standard deviations)

A Z-score above 3 or below −3 occurs less than 0.27% of the time; these values are your statistical outliers. Six Sigma manufacturing targets ±6σ tolerances, which, after the conventional 1.5σ allowance for long-term process drift, corresponds to a defect rate below 3.4 per million. This applies the empirical rule at an extreme end of the distribution.
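The three percentages above can be reproduced from the standard library alone: the probability of landing within k standard deviations of the mean is erf(k/√2). A quick check (the helper name coverage is ours):

```python
import math

def coverage(k):
    """P(|X - mu| <= k*sigma) for a normal X, via the error function."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"±{k}σ: {coverage(k) * 100:.2f}%")
# ±1σ: 68.27%, ±2σ: 95.45%, ±3σ: 99.73%
```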

How to Use the Normal Distribution Calculator

The calculator takes a handful of inputs and walks through five steps, returning the probability, Z-score, and bell curve visualization in real time. Here is the procedure step by step.

Step 1 — Enter the mean (μ)

The average value of your dataset. For the standard normal distribution use μ = 0. For exam scores distributed around 72 points, enter 72.

Step 2 — Enter the standard deviation (σ)

A positive number representing data spread. A smaller σ gives a narrow, tall bell; a larger σ gives a wide, flat curve. σ must be strictly greater than zero.

Step 3 — Enter the X value

The specific data point you want to evaluate. For example: "What is the probability a student scores below 85?" sets X = 85.

Step 4 — Select the probability type

Left-tail gives P(X < x). Right-tail gives P(X > x). Two-tailed gives the probability between x₁ and x₂. The shaded area on the bell curve updates to match your selection.

Step 5 — Read the result and Z-score

The output is the CDF probability (the shaded area under the curve) plus the corresponding Z-score for manual Z-table verification.

Worked Example: Exam Scores and the Normal Distribution

Problem: Statistics exam scores follow a normal distribution with μ = 72 and σ = 8. What fraction of students scored above 85?

Step 1 — Compute the Z-score

Z = (x − μ) / σ = (85 − 72) / 8 = 13 / 8 = 1.625

Step 2 — Look up the left-tail CDF

Φ(1.625) ≈ 0.9479 from the standard normal table or calculator.

Step 3 — Apply the complement rule for right-tail

P(X > 85) = 1 − 0.9479 = 0.0521

Step 4 — Interpret

About 5.21% of students scored above 85. This score sits 1.625 standard deviations above the class mean — in approximately the 95th percentile.

P(X > 85) ≈ 0.0521 (5.21%). The Z-score of 1.625 locates this score in the upper 5.21% of the distribution. The complement P(X < 85) = 0.9479 means 94.79% of students scored below 85.

What Is the Difference Between the PDF and CDF of a Normal Distribution?

The PDF gives the height of the bell curve at a specific point x; the CDF gives the total probability to the left of x — the area under the curve.

Students commonly ask for the probability that X equals a specific value. For a continuous distribution like the normal, this probability is technically zero — probability only exists over intervals, not single points. The PDF at x is a density (height), not a probability. To get a probability, you integrate the PDF over an interval, which is what the CDF does automatically. This calculator computes the CDF, which is what almost every statistics problem requires.
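The shrinking-interval argument is easy to demonstrate. This sketch builds the normal CDF from math.erf, using the closed form Φ(z) = ½(1 + erf(z/√2)), and shows the probability of a narrowing window around x collapsing toward zero:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """F(x) = P(X <= x) for X ~ N(mu, sigma^2)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma, x = 72, 8, 85
print(round(norm_cdf(x, mu, sigma), 4))  # 0.9479, the left-tail area at x = 85

# P(X == 85) is the limit of P(85 - h < X < 85 + h) as h shrinks: it goes to 0
for h in (1.0, 0.1, 0.001):
    p = norm_cdf(x + h, mu, sigma) - norm_cdf(x - h, mu, sigma)
    print(f"P({x - h} < X < {x + h}) = {p:.6f}")
```

Each narrower interval carries less probability, which is why a single point carries none: only intervals have probability for a continuous distribution.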

Probability Density Function (PDF)

f(x) = 1/(σ√(2π)) · exp(−(x − μ)² / (2σ²))

  • x = value being evaluated
  • μ = mean (center of bell curve)
  • σ = standard deviation (width)
  • e ≈ 2.71828 (Euler's number)
  • π ≈ 3.14159

Cumulative Distribution Function (CDF)

F(x) = P(X ≤ x) = Φ((x − μ) / σ)

  • Left-tail: P(X < x) = F(x)
  • Right-tail: P(X > x) = 1 − F(x)
  • Two-tailed: P(a < X < b) = F(b) − F(a)

What Is a Z-Score, and Why Does It Matter?

A Z-score measures how many standard deviations a value lies above or below the mean. It converts any normally distributed variable into the standard normal scale (mean = 0, SD = 1), so you can compare values across distributions with different units and look up probabilities from a single Z-table.

Z-Score Formula: Z = (x − μ) / σ
Z = 0: the value equals the mean. Z = 1: one standard deviation above. Z = −2: two standard deviations below the mean.
Converting a Z-score back to X (Inverse Normal): x = μ + Z · σ
Example: If Z = 1.28, μ = 72, σ = 8 → x = 72 + 1.28 × 8 = 72 + 10.24 = 82.24. This score represents the 90th percentile.
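The two formulas form a round trip, which this small sketch (the helper names z_score and from_z are ours) makes concrete with the exam-score numbers above:

```python
mu, sigma = 72, 8

def z_score(x):
    """Standardize: how many standard deviations x sits from the mean."""
    return (x - mu) / sigma

def from_z(z):
    """Inverse: map a Z-score back to the original scale."""
    return mu + z * sigma

x = from_z(1.28)   # 72 + 1.28 * 8 = 82.24, roughly the 90th-percentile score
print(x)
print(z_score(x))  # recovers 1.28
```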

Normal Distribution: Complete Formula and Entity Reference

The table below covers every key formula and concept needed when working with the normal distribution, structured for quick reference by students and educators.

| Concept | Formula | Plain Explanation | Primary Use Case |
| --- | --- | --- | --- |
| Normal Distribution PDF | f(x) = (1/(σ√(2π))) · e^(−(x−μ)²/(2σ²)) | Height of bell curve at any x value | Plotting the shape; comparing two distributions |
| Cumulative Distribution (CDF) | F(x) = Φ((x−μ)/σ) | Total probability to the left of x; area under curve | Left-tail, right-tail, and two-tailed probability calculations |
| Z-Score | Z = (x − μ) / σ | Standard deviations from the mean; standardizes any normal variable | Comparing scores across datasets; Z-table lookups |
| Inverse Normal | x = μ + Φ⁻¹(p) · σ | Given probability p, returns the cutoff X value | Finding percentile thresholds; setting cutoff scores |
| Mean (μ) | μ = Σx / n | Sum of values divided by count; center of the bell curve | Locating the peak; defining distribution center |
| Standard Deviation (σ) | σ = √[Σ(x−μ)²/n] | Typical distance of a data point from the mean; controls bell width | Measuring spread; setting tolerance limits in quality control |
| Variance (σ²) | σ² = Σ(x−μ)²/n | Average squared deviation; σ = √variance | Deriving standard deviation; ANOVA and regression calculations |
| Empirical Rule | μ±1σ = 68.27%, μ±2σ = 95.45%, μ±3σ = 99.73% | Fixed proportions of data within each σ band | Quality control; outlier detection; grading on a curve |
| Standard Normal Distribution | Z ~ N(0, 1) | Normal distribution with μ = 0, σ = 1 | Z-table lookups; basis for standardizing all normal variables |
| Central Limit Theorem | X̄ ≈ N(μ, σ²/n) for large n | Sample means become approximately normal as sample size grows | Justifies normal-based inference from almost any population |
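The Central Limit Theorem entry deserves a demonstration: averages of draws from a decidedly non-normal population still land close to N(μ, σ²/n). A simulation sketch using a uniform population (the sample size and trial count are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Population: uniform on [0, 1], so mu = 0.5 and variance = 1/12 (not normal at all)
n, trials = 50, 2000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

# CLT prediction: mean of sample means ≈ 0.5, sd ≈ sqrt(1/12) / sqrt(50) ≈ 0.0408
print(round(statistics.fmean(means), 3))
print(round(statistics.stdev(means), 4))
```

The observed spread of the sample means should sit near the predicted σ/√n even though no individual draw is normally distributed.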

What Is the Inverse Normal Distribution Calculator?

The inverse normal distribution reverses the usual calculation: given a probability p, it finds the X value where P(X < x) = p. Instead of "what is the probability of scoring below 85?", you ask "what score puts a student in the top 10%?" — and the inverse calculator returns that boundary.

Example: Exam scores: μ = 72, σ = 8. What score separates the top 10% of the class?

Step 1: Top 10% means P(X > x) = 0.10, so P(X < x) = 0.90.
Step 2: Look up Φ⁻¹(0.90) ≈ 1.2816 from the inverse standard normal.
Step 3: x = μ + Z · σ = 72 + 1.2816 × 8 = 72 + 10.25 = 82.25
A score of approximately 82 separates the top 10% of students.
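Without a Z-table, Φ⁻¹ can be obtained numerically: the CDF is strictly increasing, so simple bisection converges. A teaching sketch (the helper inv_norm is ours, not a library routine):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def inv_norm(p, lo=-10.0, hi=10.0):
    """Approximate the inverse standard normal by bisection; a sketch, not production code."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mu, sigma = 72, 8
z = inv_norm(0.90)               # ≈ 1.2816
print(round(mu + z * sigma, 2))  # 82.25, matching the worked example
```

Production libraries use faster rational approximations, but bisection makes the definition transparent: find the z whose left-tail area equals p.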

Normal vs. Other Distributions: When to Use Which?

Use the normal distribution for continuous, symmetric data with a known or estimated standard deviation and a sample size above 30. For other data structures, a different distribution is more appropriate.

| Distribution | When to use | Key parameters | Real-world example |
| --- | --- | --- | --- |
| Normal | Continuous data; n > 30 or σ known | μ, σ | Exam scores, heights, measurement errors |
| t-Distribution | Continuous data; small sample (n < 30); σ unknown | Degrees of freedom (n − 1) | Clinical trial with 15 patients |
| Binomial | Discrete counts; fixed trials; two outcomes | n, p | Heads in 20 coin flips; pass/fail in quality testing |
| Poisson | Count of rare events per interval | λ (average rate) | Website visits per hour; defects per batch |

Normal Distribution in Python, Excel, and R

For data analysts and scientists computing normal probabilities programmatically, use the following standard functions.

Python (SciPy)

from scipy import stats

mu, sigma = 72, 8
x = 85

# Left-tail probability P(X < x)
p_left = stats.norm.cdf(x, loc=mu, scale=sigma)
print(f"P(X < {x}) = {p_left:.4f}")  # 0.9479

# Right-tail probability P(X > x)
p_right = 1 - p_left
print(f"P(X > {x}) = {p_right:.4f}")  # 0.0521

# Z-score
z = (x - mu) / sigma
print(f"Z = {z:.4f}")  # 1.6250

# Inverse normal (90th percentile)
x_inv = stats.norm.ppf(0.90, loc=mu, scale=sigma)
print(f"90th percentile = {x_inv:.2f}")  # 82.25

Microsoft Excel

=NORM.DIST(85, 72, 8, TRUE)       // Left-tail P(X < 85) = 0.9479
=1 - NORM.DIST(85, 72, 8, TRUE)   // Right-tail P(X > 85) = 0.0521
=NORM.INV(0.90, 72, 8)            // 90th percentile = 82.25
=STANDARDIZE(85, 72, 8)           // Z-score = 1.625

R

pnorm(85, mean = 72, sd = 8)                         # Left-tail: 0.9479
pnorm(85, mean = 72, sd = 8, lower.tail = FALSE)     # Right-tail: 0.0521
qnorm(0.90, mean = 72, sd = 8)                       # 90th percentile: 82.25
(85 - 72) / 8                                        # Z-score: 1.625


Sources and Further Reading

Authority sources cited in this guide:

  • National Institute of Standards and Technology (NIST). Engineering Statistics Handbook — Normal Distribution. itl.nist.gov
  • MIT OpenCourseWare. 18.650 Statistics for Applications. ocw.mit.edu
  • Khan Academy. Normal Distributions Review. khanacademy.org
  • Wikipedia contributors. Normal distribution. en.wikipedia.org
  • Wackerly, Mendenhall & Scheaffer. Mathematical Statistics with Applications, 7th ed. Cengage Learning, 2008.

FAQs

What is a normal distribution, and why is it important?

A normal distribution is a continuous, symmetric, bell-shaped probability distribution defined by two parameters: mean (μ) and standard deviation (σ). It is the most important distribution in statistics because, by the Central Limit Theorem, sample means from any population converge to normal as sample size grows. About 68% of values fall within one standard deviation of the mean, 95% within two, and 99.7% within three.

What inputs does the normal distribution calculator need?

Three inputs are required: (1) the mean μ — center of the distribution, (2) the standard deviation σ — which must be greater than zero, (3) the X value you want to evaluate. For two-tailed calculations, you also need a second X value (x₂). If working from raw data, compute mean and standard deviation first using our descriptive statistics calculator.

What is the difference between left-tail, right-tail, and two-tailed probabilities?

Left-tail P(X < x) gives the probability a value falls below x — use for "less than" questions. Right-tail P(X > x) = 1 − P(X < x) gives the probability a value exceeds x — use for "greater than" questions. Two-tailed P(x₁ < X < x₂) gives the probability of falling between two values — use for "between" questions or two-sided hypothesis tests.

What is a Z-score?

A Z-score measures how many standard deviations a value lies from the mean, using the formula Z = (x − μ) / σ. A Z of 0 means the value equals the mean; Z = 1.5 means 1.5 standard deviations above. Z-scores let you compare values across different distributions and look up probabilities from a single standard normal table.

When should I use the t-distribution instead of the normal distribution?

Use the t-distribution when your sample size is small (n < 30) and the population standard deviation σ is unknown — the situation in most real research. When n > 30, the t-distribution closely approximates the normal and either works. If σ is known from prior population data, use the normal distribution regardless of sample size.

How do I find an X value from a probability (inverse normal)?

The inverse normal works backward: given a cumulative probability p, it finds the X value where P(X < x) = p. For example, with μ = 72 and σ = 8, entering p = 0.90 returns x ≈ 82.25 — the score at the 90th percentile. Use the "Inverse Normal" tab in the calculator above. The formula is x = μ + Φ⁻¹(p) · σ.

What does the area under the normal curve represent?

The total area under the entire normal curve equals exactly 1 (100%). Any sub-region represents the probability that a randomly drawn value falls in that range. A left-tail area of 0.9479 means there is a 94.79% probability of drawing a value less than x. This is why all normal distribution probability problems reduce to area calculations — which is exactly what the CDF computes.