What Is the Computed Mean and What Is the Actual Mean?
These two terms show up together in textbook problems where data has been summarized into a frequency distribution rather than listed out individually. They measure the same thing — the average — but they come from different sources, and that source gap is exactly what the comparison is testing.
The computed mean (also called the grouped mean or estimated mean) is calculated using the class intervals and frequencies in the table. Because individual values inside each interval are unknown, you substitute the midpoint of each class and treat it as representative of every value in that group.
The actual mean is the true arithmetic average calculated from every individual data value in the original, ungrouped dataset. In textbook problems, it is given to you so you can check how accurate your computed result turned out to be.
Computed mean = average estimated from grouped data using midpoints. Actual mean = precise average from all individual data points. The difference between them reflects how much information is lost when data is summarized into intervals.
🔑 Key Takeaways
The essential ideas you need before working through any computed-vs-actual-mean problem.
The computed mean uses midpoints as approximations. This is why it differs from the actual mean — midpoints rarely match the true center of each class's values.
The 5% threshold decides "close" vs "not close." If the difference is less than 5% of the actual mean, the means are considered close.
Only two of the four answer choices can ever be correct. Options pairing "close" with "> 5%" or "not close" with "< 5%" are logically contradictory and always wrong.
Larger samples reduce the gap. As sample size grows, the computed mean converges toward the actual mean — a direct consequence of the Law of Large Numbers.
The formula never changes. Σ(f × x) ÷ Σf works for every grouped dataset, regardless of topic — temperatures, speeds, test scores, or anything else.
What Is the Computed Mean?
When raw data is organized into a frequency distribution, the individual values inside each class interval are hidden. A temperature reading in the 45–49°F interval could be 45.1, 47.3, or 48.9 — the table does not say. To calculate any kind of average from this grouped format, you need a single representative value for each class.
That representative is the midpoint: the number exactly halfway between the lower and upper class limits. Using midpoints to estimate the mean is a standard technique in descriptive statistics, and the resulting value is the computed mean.
Computed Mean: x̄ = Σ(f × x) / Σf, where x = class midpoint and f = class frequency.
The computed mean gives you a workable estimate when only summarized data exists. It's used by meteorologists analyzing temperature range tables, traffic engineers working with speed distribution data, and census analysts working with income brackets — any situation where grouped summaries stand in for raw records.
How to Find the Midpoint of a Class
The midpoint formula is always the same: Midpoint = (Lower Limit + Upper Limit) / 2.
For a class of 40–44, the midpoint is (40 + 44) / 2 = 42. For 45–49, it is (45 + 49) / 2 = 47. This pattern holds regardless of interval width.
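As a quick sketch, the midpoint calculation translates into a one-line Python helper (the function name is ours, not from any textbook):

```python
def midpoint(lower: float, upper: float) -> float:
    """Midpoint of a class interval: halfway between the class limits."""
    return (lower + upper) / 2

print(midpoint(40, 44))  # 42.0
print(midpoint(45, 49))  # 47.0
```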
What Is the Actual Mean?
The actual mean is the straight arithmetic average of every individual value in the dataset. It uses no shortcuts or approximations.
Actual Mean: x̄ = Σx / n for a sample, or μ = Σx / N for a population.
In textbook problems, the actual mean is provided as a benchmark. The exercise is asking: given that you only had the frequency table, how close did your estimate get to the truth? This is genuinely useful to know. If the computed mean regularly falls within a small margin of the actual mean, frequency distributions are a reliable summary tool for that type of data.
| Term | Symbol | Source | When Used |
|---|---|---|---|
| Computed Mean | x̄c | Frequency table using midpoints | When only grouped data is available |
| Actual Mean | x̄ or μ | Every individual data value | Given in problems as the reference point |
| Sample Mean | x̄ | Random sample of raw values | Most research and surveys |
| Population Mean | μ | Every member of the population | Census data, complete records |
How to Calculate the Computed Mean From a Frequency Distribution
Six steps. They apply to every grouped-data problem — temperature, speed, test scores, anything.
1. Identify the class limits and frequencies. Read these directly from the frequency distribution table given in the problem.
2. Find the midpoint of each class. Use: Midpoint = (Lower Limit + Upper Limit) / 2.
3. Multiply each midpoint by its frequency (f × x). Do this for every row in the table separately.
4. Sum the f × x column. Add the results from every row into a single total.
5. Sum the frequencies (Σf). Add up all frequency values. This is your total number of data points.
6. Divide Σ(f × x) by Σf. This is your final answer for part one of the problem.
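The steps above can be sketched as a short Python function (a minimal sketch; the function and variable names are ours, not from the textbook):

```python
def computed_mean(classes):
    """Grouped (computed) mean: sum(f * x) / sum(f), with x the class midpoint.

    `classes` is a list of (lower_limit, upper_limit, frequency) tuples.
    """
    fx_total = 0.0   # running sum of f * x
    f_total = 0      # running sum of frequencies
    for lower, upper, f in classes:
        x = (lower + upper) / 2   # midpoint of the class
        fx_total += f * x         # f * x for this row, accumulated
        f_total += f              # total number of data points
    return fx_total / f_total

# Temperature table from the worked example in the next section:
temps = [(40, 44, 3), (45, 49, 4), (50, 54, 9), (55, 59, 5), (60, 64, 2)]
print(round(computed_mean(temps), 1))  # 51.8
```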
Worked Example
Temperature Data — Frequency Distribution
Given frequency distribution of low temperatures (°F):
| Low Temp (°F) | Frequency (f) | Midpoint (x) | f × x |
|---|---|---|---|
| 40–44 | 3 | 42 | 126 |
| 45–49 | 4 | 47 | 188 |
| 50–54 | 9 | 52 | 468 |
| 55–59 | 5 | 57 | 285 |
| 60–64 | 2 | 62 | 124 |
| Total | 23 | — | 1,191 |
Computed Mean = 1,191 / 23 = 51.8°F
If the problem then states that the actual mean is 51.8°F, the difference is zero and the comparison step is settled before you start: the means could not be any closer.
How to Compare the Computed Mean to the Actual Mean — The 5% Rule
This is the section that determines which answer choice you select. The rule used in Triola's Elementary Statistics (one of the most widely assigned introductory statistics textbooks) treats the computed mean as "close" to the actual mean when their difference is less than 5% of the actual mean.
Run this calculation, then apply the decision: % Difference = |Computed Mean − Actual Mean| / Actual Mean × 100. If the result is less than 5%, the means are close; if it is 5% or more, they are not close.
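The decision rule translates into two small helpers (a Python sketch; the function names `is_close` and `pct_difference` are ours):

```python
def is_close(computed: float, actual: float) -> bool:
    """5% rule: close if |computed - actual| is less than 5% of the actual mean."""
    return abs(computed - actual) < 0.05 * actual

def pct_difference(computed: float, actual: float) -> float:
    """Percentage difference, always relative to the actual mean."""
    return abs(computed - actual) / actual * 100

print(is_close(51.8, 51.8))                   # True
print(round(pct_difference(46.7, 46.6), 2))   # 0.21
```

Note that the actual mean, never the computed mean, sits in the denominator.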
The Four Answer Choices Decoded
Every version of this multiple-choice question uses the same four options. Two can never be correct. Here is what each one means and when to pick it:
Option A — "The computed mean is close to the actual mean because the difference is less than 5%." Pick this when the % difference is under 5%. The grouped data gave a good estimate.
Option B — "The computed mean is not close to the actual mean because the difference is more than 5%." Pick this when the % difference is 5% or higher. The midpoint approximation introduced too much error.
Option C — "Close because the difference is more than 5%." Self-contradictory: a difference over 5% cannot make the means "close" by definition.
Option D — "Not close because the difference is less than 5%." Self-contradictory: a difference under 5% cannot make the means "not close" by definition.
Students sometimes compute 5% of the computed mean instead of 5% of the actual mean. The rule always uses the actual mean as the reference. Double-check which number you divide by.
Three Practice Problems With Full Solutions
Practice Problem 1 — Close Result
Speed Data: Compare to Actual Mean of 46.6 mph
| Speed (mph) | Frequency (f) | Midpoint (x) | f × x |
|---|---|---|---|
| 42–45 | 26 | 43.5 | 1,131 |
| 46–49 | 13 | 47.5 | 617.5 |
| 50–53 | 7 | 51.5 | 360.5 |
| 54–57 | 3 | 55.5 | 166.5 |
| 58–61 | 1 | 59.5 | 59.5 |
| Total | 50 | — | 2,335 |
Step 1 — Computed Mean: 2,335 / 50 = 46.7 mph
Step 2 — Difference: |46.7 − 46.6| = 0.1
Step 3 — 5% of actual mean: 0.05 × 46.6 = 2.33
Step 4 — Compare: 0.1 < 2.33
% difference = 0.1 / 46.6 × 100 = 0.21%
The grouped speed data produced an estimate just 0.21% away from the true mean. That is a tight result.
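Problem 1 can be rechecked end to end in a few lines of Python (a sketch; the variable names are ours):

```python
# (lower limit, upper limit, frequency) for each speed class
speeds = [(42, 45, 26), (46, 49, 13), (50, 53, 7), (54, 57, 3), (58, 61, 1)]
fx = sum(f * (lo + hi) / 2 for lo, hi, f in speeds)  # sum of f * midpoint
n = sum(f for _, _, f in speeds)                     # total frequency
computed = round(fx / n, 1)                          # 46.7
actual = 46.6
diff = abs(computed - actual)                        # about 0.1
threshold = 0.05 * actual                            # 2.33
print(computed, diff < threshold)                    # 46.7 True
```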
Practice Problem 2 — Close Result Despite Skew
Temperature Data: Compare to Actual Mean of 57.2°F
| Low Temp (°F) | Frequency (f) | Midpoint (x) | f × x |
|---|---|---|---|
| 40–44 | 1 | 42 | 42 |
| 45–49 | 1 | 47 | 47 |
| 50–54 | 4 | 52 | 208 |
| 55–59 | 4 | 57 | 228 |
| 60–64 | 9 | 62 | 558 |
| Total | 19 | — | 1,083 |
Step 1 — Computed Mean: 1,083 / 19 = 57.0°F
Step 2 — Difference: |57.0 − 57.2| = 0.2
Step 3 — 5% of actual mean: 0.05 × 57.2 = 2.86
Step 4 — Compare: 0.2 < 2.86
% difference = 0.2 / 57.2 × 100 = 0.35%
Despite the data being skewed toward the higher temperature ranges, the midpoint method still produced a close estimate.
Practice Problem 3 — Not Close Result
Speed Data: Compare to Actual Mean of 50.9 mph
| Speed (mph) | Frequency (f) | Midpoint (x) | f × x |
|---|---|---|---|
| 42–45 | 29 | 43.5 | 1,261.5 |
| 46–49 | 13 | 47.5 | 617.5 |
| 50–53 | 6 | 51.5 | 309 |
| 54–57 | 4 | 55.5 | 222 |
| 58–61 | 1 | 59.5 | 59.5 |
| Total | 53 | — | 2,469.5 |
Step 1 — Computed Mean: 2,469.5 / 53 = 46.6 mph
Step 2 — Difference: |46.6 − 50.9| = 4.3
Step 3 — 5% of actual mean: 0.05 × 50.9 = 2.545
Step 4 — Compare: 4.3 > 2.545
% difference = 4.3 / 50.9 × 100 = 8.45%
The heavy skew toward lower speeds (29 out of 53 readings in the 42–45 range) pulled the midpoint estimate well below the true mean. The grouped data is a poor approximation here.
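The same few lines of Python confirm the not-close verdict for Problem 3 (a sketch; variable names are ours):

```python
# (lower limit, upper limit, frequency) for each speed class
speeds = [(42, 45, 29), (46, 49, 13), (50, 53, 6), (54, 57, 4), (58, 61, 1)]
fx = sum(f * (lo + hi) / 2 for lo, hi, f in speeds)  # 2469.5
n = sum(f for _, _, f in speeds)                     # 53
computed = round(fx / n, 1)                          # 46.6
actual = 50.9
pct = abs(computed - actual) / actual * 100          # % difference vs actual mean
print(round(pct, 2))                                 # 8.45
```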
Why the Computed Mean Differs From the Actual Mean
The core reason is that midpoints are substitutions, not facts. When you assign the value 47 to every observation in the 45–49 interval, you are assuming each reading falls exactly at the center of that range. Real data rarely behaves that neatly.
If most of the values in a 45–49 temperature class actually cluster around 48 or 49, the midpoint of 47 underestimates the average for that group. Multiply that small error across multiple classes and it accumulates into a meaningful gap between computed and actual means.
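The clustering effect is easy to see with a toy dataset (illustrative numbers, not from any problem above):

```python
# Five readings in the 45-49 class, clustered near the top of the interval.
values = [48, 48, 49, 49, 49]
actual = sum(values) / len(values)  # 48.6, the true average of this class
midpoint_estimate = 47              # every value replaced by the class midpoint
print(actual, midpoint_estimate)    # 48.6 47 -- the midpoint underestimates
```

Each class contributes a small error like this one, and the errors add up across the whole table.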
Three factors drive how large the gap gets:
- Distribution shape — symmetric distributions produce smaller errors because values tend to balance around the midpoint. Skewed distributions accumulate error in the tail direction.
- Class interval width — wider intervals give midpoints less precision. Narrower intervals produce better estimates.
- Sample size — with more observations, midpoint errors tend to cancel out. This is why larger datasets produce computed means closer to the actual mean.
Where Grouped Data Means Show Up in Real Life
The computed-vs-actual comparison is not just a textbook exercise. Every field that reports data in ranges uses some version of this calculation, and the accuracy of the estimate has real consequences.
The U.S. Census Bureau publishes household income in $5,000 brackets. Analysts who need average income for a region must compute it from those brackets using midpoints — exactly the same method. The actual mean, if recoverable from raw tax records, serves as the accuracy check.
Traffic safety engineers work with speed distributions sorted into 5 mph intervals from radar guns. The computed mean of that distribution feeds directly into speed limit decisions and road design. An 8% error versus the actual mean could meaningfully change the recommendation.
Meteorologists report historical temperatures in ranges, not individual readings. Climate trend analysis relies on computing means from these grouped records, then validating against station averages (the actual mean) when those are available.
In each case, the 5% check is a practical quality test: is this summary close enough to trust, or does the grouping introduce too much distortion?
Self-Check Before Submitting Your Answer
- Did you find the midpoint of every class, not just use the lower limit?
- Did you multiply midpoint × frequency for each row separately, before summing?
- Did you divide by total frequency, not by the number of classes?
- Did you calculate % difference using the actual mean as the denominator, not the computed mean?
- Did you round the computed mean to the nearest tenth as directed?
- Did you check whether the answer options pair "close" with "< 5%" and "not close" with "> 5%"?
FAQs
What is the difference between the computed mean and the actual mean?
The computed mean is estimated from grouped data using class midpoints. The actual mean is the precise arithmetic average calculated from every individual data value. The computed mean is an approximation; the actual mean is the truth.

When are the two means considered "close"?
When the absolute difference between them is less than 5% of the actual mean. Calculate % difference = |computed − actual| / actual × 100. A result under 5% means the means are close.

Why does the computed mean use class midpoints?
Because the individual values inside each class interval are unknown in grouped data. The midpoint is the single best estimate for the average of all values within that range, so it substitutes for the missing raw readings.

Which answer choices are always wrong?
Any option that says "close because the difference is more than 5%" or "not close because the difference is less than 5%" is always logically wrong. Only "close because < 5%" and "not close because > 5%" can be correct.

Can the computed mean ever equal the actual mean?
Theoretically yes, if every value in each class interval happened to fall exactly at its midpoint. In practice this is rare, but for symmetric distributions with many observations, the difference can be extremely small.

Does a larger sample make the computed mean more accurate?
Yes. Larger samples allow midpoint approximation errors to average out across more observations. This is a practical expression of the Law of Large Numbers, which guarantees that the sample mean converges toward the population mean as n increases.

Is the 5% threshold based on the computed mean or the actual mean?
Always 5% of the actual mean. The actual mean is the reference point, the thing you are checking your estimate against. Using the computed mean as the denominator would change the threshold and give you a wrong answer.