BY: Statistics Fundamentals Team
Reviewed By: Minsa A (Senior Statistics Editor)

F Table: Critical Values, PDF Download & Complete Guide

The F table gives critical values of the F distribution for right-tailed hypothesis tests at α = 0.10, 0.05, and 0.01. Use it for ANOVA, regression analysis, and tests of equality of variance — with numerator df across the top and denominator df down the side.

What Is an F Table?

An F table lists critical values for right-tailed F-tests. Select the table for your significance level (α). Find numerator degrees of freedom (df1) in the columns and denominator degrees of freedom (df2) in the rows. The value at their intersection is the F critical value. If your calculated F-statistic equals or exceeds this value, the result is statistically significant and you reject the null hypothesis.

F Critical Value Calculator


Click any cell to highlight the critical value. All values are right-tail critical values Fα(df1, df2). Columns = numerator df1, rows = denominator df2.

What Is the F Distribution?

Definition

The F distribution is a ratio of two chi-squared distributions, each divided by its degrees of freedom. It is right-skewed, always positive, and defined by two parameters: numerator df (df1) and denominator df (df2). It is written as F(df1, df2).

Key Properties

F values are always ≥ 0. The distribution is right-skewed — not symmetric — so critical values are read from the right tail only. As df1 and df2 both grow large, the F distribution approaches normality.
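
The definition above can be checked numerically. The sketch below uses assumed values (df1 = 5, df2 = 20, and an arbitrary simulation size): it draws two independent chi-squared samples, forms the scaled ratio, and compares the ratio's empirical 95th percentile with the exact F critical value from scipy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df1, df2, n = 5, 20, 200_000           # assumed df values and simulation size

chi1 = rng.chisquare(df1, size=n)      # chi-squared with df1 degrees of freedom
chi2 = rng.chisquare(df2, size=n)      # chi-squared with df2 degrees of freedom
ratio = (chi1 / df1) / (chi2 / df2)    # by definition, this ratio follows F(df1, df2)

print(np.quantile(ratio, 0.95))        # empirical 95th percentile (simulation estimate)
print(stats.f.ppf(0.95, df1, df2))     # exact right-tail critical value, about 2.71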

Named After

The F distribution is named after Sir Ronald Fisher, who developed it in the 1920s. George W. Snedecor later tabulated it, which is why you sometimes see it called Snedecor's F distribution or the variance ratio distribution.

When to Use the F Table

The F table applies to four main test types. Each uses the same lookup process but computes degrees of freedom differently.

📊 One-Way ANOVA

Comparing three or more group means. df1 = k − 1 (k = groups), df2 = N − k (N = total observations). The most common F-table use case.

📈 Regression F-Test

Testing overall significance of a regression model. df1 = number of predictors (p), df2 = n − p − 1. A significant result means at least one predictor explains variance.

⚖️ Equality of Variances

Testing whether two populations have equal variances. df1 = n1 − 1, df2 = n2 − 1. For two-tailed tests, use α/2 to find each critical value.

🔲 Two-Way ANOVA

Testing two factors and their interaction simultaneously. Each effect — Factor A, Factor B, and A×B interaction — has its own df1, df2, and separate F critical value.

How to Read the F Table: Step-by-Step

Reading an F table is a four-step process. Once you know your significance level and both degrees of freedom, the table gives you the exact threshold for your decision.

1. Choose your significance level (α). The most common is α = 0.05. Select that table — or the corresponding tab above. Stricter research may use α = 0.01.
2. Find numerator df (df1) in the column headers. For ANOVA, df1 = k − 1 where k is the number of groups. For regression, df1 is the number of predictors. Columns run left to right.
3. Find denominator df (df2) in the left column (rows). For ANOVA, df2 = N − k. For regression, df2 = n − p − 1. Rows run top to bottom.
4. Read the critical value at the row/column intersection. If your calculated F-statistic ≥ this value, reject H₀ — the result is statistically significant. Example: F(3, 30) at α = 0.05 → critical value = 2.92.
What if my df is not listed? Use the next lower listed df value — this gives a slightly higher critical value, which is conservative and controls Type I error. For df2 > 120, use the ∞ row. Statistical software gives exact values for any df.
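
For readers who prefer to verify the lookup in code, here is a minimal sketch of the same decision rule using scipy; the calculated F-statistic of 4.10 is a hypothetical value, not from a real dataset.

from scipy import stats

alpha, df1, df2 = 0.05, 3, 30
f_crit = stats.f.ppf(1 - alpha, df1, df2)   # about 2.92, matching the table entry above

f_calculated = 4.10                         # hypothetical test statistic
if f_calculated >= f_crit:
    print(f"F = {f_calculated:.2f} >= {f_crit:.2f}: reject H0")
else:
    print(f"F = {f_calculated:.2f} < {f_crit:.2f}: fail to reject H0")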

Degrees of Freedom Calculator for F-Tests

Select your test type, enter the required values, and get df1 and df2 instantly.
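
As a rough code equivalent of this calculator, the helper below maps each test type to its df formulas; the function name and test-type labels are illustrative, not part of any library.

def f_test_df(test_type, **kw):
    """Return (df1, df2) for the common F-test types described on this page."""
    if test_type == "one_way_anova":       # k groups, N total observations
        return kw["k"] - 1, kw["N"] - kw["k"]
    if test_type == "regression":          # p predictors, n observations
        return kw["p"], kw["n"] - kw["p"] - 1
    if test_type == "variance_ratio":      # two samples of sizes n1 and n2
        return kw["n1"] - 1, kw["n2"] - 1
    raise ValueError(f"unknown test type: {test_type}")

print(f_test_df("one_way_anova", k=3, N=30))      # (2, 27)
print(f_test_df("regression", p=4, n=50))         # (4, 45)
print(f_test_df("variance_ratio", n1=21, n2=16))  # (20, 15)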

Worked Examples: Using the F Table

Three examples cover the most common F-table scenarios: comparing group means, testing a regression model, and checking variance equality.

Example 1: One-Way ANOVA — Three Teaching Methods

Scenario: A researcher compares exam scores across three teaching methods, with 10 students per group (N = 30) at α = 0.05.

df1 = k − 1 = 3 − 1 = 2 (numerator df)
df2 = N − k = 30 − 3 = 27 (denominator df)
Fcrit = 3.35 (from F(2, 27) at α = 0.05)
Decision rule: If F-calculated ≥ 3.35, at least one teaching method produces significantly different exam scores. Follow up with Tukey's HSD to identify which pairs differ.
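
The same analysis can be run end to end in Python. The scores below are made-up illustration data (three groups of 10), not the researcher's data; scipy.stats.f_oneway returns the ANOVA F-statistic and p-value.

from scipy import stats

method_a = [72, 75, 78, 80, 74, 77, 79, 73, 76, 81]   # hypothetical exam scores
method_b = [68, 71, 70, 74, 69, 72, 73, 67, 70, 75]
method_c = [80, 83, 85, 79, 82, 84, 86, 81, 78, 87]

f_stat, p_value = stats.f_oneway(method_a, method_b, method_c)
f_crit = stats.f.ppf(0.95, dfn=2, dfd=27)              # 3.35, as in the table lookup above

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, critical value = {f_crit:.2f}")
# Reject H0 if f_stat >= f_crit, then use Tukey's HSD to see which pairs differ.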

Example 2: Multiple Regression F-Test — Predicting Sales

Scenario: A regression model has 4 predictors and 50 observations. Test overall model significance at α = 0.05.

df1 = p = 4 (number of predictors)
df2 = n − p − 1 = 45 (error degrees of freedom)
Fcrit ≈ 2.58 (from F(4, 45) at α = 0.05)
Decision rule: If the model F-statistic ≥ 2.58, at least one of the four predictors significantly explains the variation in sales. This appears in regression output as the overall model F-test.
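
One convenient way to reproduce this test in code uses the identity F = (R²/p) / ((1 − R²)/(n − p − 1)); the R² of 0.35 below is an assumed value for illustration.

from scipy import stats

n, p, r_squared = 50, 4, 0.35                          # r_squared is hypothetical
df1, df2 = p, n - p - 1                                # 4 and 45

f_stat = (r_squared / df1) / ((1 - r_squared) / df2)   # about 6.06 for these assumed numbers
f_crit = stats.f.ppf(0.95, df1, df2)                   # about 2.58

print(f"F = {f_stat:.2f} vs critical {f_crit:.2f}: "
      f"{'reject H0' if f_stat >= f_crit else 'fail to reject H0'}")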

Example 3: Equality of Variance F-Test

Scenario: Test whether two production lines have equal variance. Line 1: n1 = 21, Line 2: n2 = 16. Use α = 0.05 (two-tailed, so use α/2 = 0.025 per tail).

df1 = n1 − 1 = 20 (numerator df)
df2 = n2 − 1 = 15 (denominator df)
Fcrit ≈ 2.76 (from F(20, 15) at α = 0.025)
Decision rule: Place the larger variance in the numerator. If F-calculated ≥ 2.76, conclude the production lines have significantly different variances at the 5% level.
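
A short sketch of the calculation, with assumed sample variances for the two lines; the larger variance goes in the numerator and the critical value uses α/2 = 0.025.

from scipy import stats

n1, n2 = 21, 16
s1_sq, s2_sq = 4.8, 1.9                            # hypothetical sample variances, line 1 and line 2

f_stat = s1_sq / s2_sq                             # larger variance in the numerator, about 2.53
f_crit = stats.f.ppf(1 - 0.025, n1 - 1, n2 - 1)    # about 2.76 for F(20, 15)

print(f"F = {f_stat:.2f}, critical value = {f_crit:.2f}")
print("reject H0" if f_stat >= f_crit else "fail to reject H0")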

F Table vs t-Table vs Chi-Square Table

Each table covers a different family of tests. The key distinction comes down to what you are comparing and how many groups are involved.

Feature | F Table | t-Table | Chi-Square Table
Distribution shape | Right-skewed | Symmetric | Right-skewed
Parameters | df1 and df2 | df only | df only
Primary use | ANOVA, regression | Mean comparison (1–2 groups) | Categorical data, goodness-of-fit
Always positive? | Yes (≥ 0) | No (both tails) | Yes (≥ 0)
Relationship | F(1, df) = t² | t² = F when df1 = 1 | F(df1, ∞) = χ²/df1

Finding F Critical Values Without a Table

Statistical software gives exact F critical values for any df combination, which is especially useful when your df falls between the values listed in a printed table.

Excel

=F.INV.RT(0.05, 3, 27)
// Returns: 2.9604

=F.INV.RT(alpha, df1, df2)

Older Excel: use FINV(alpha, df1, df2)

R

qf(p = 0.95, df1 = 3, df2 = 27)
# Output: 2.960351

# Note: use 1−α for right tail

Use 1 − alpha as the probability argument

Python

from scipy import stats
stats.f.ppf(q=0.95, dfn=3, dfd=27)
# Output: 2.960351

scipy.stats.f.ppf gives the quantile function

Assumptions of the F-Test

Valid F-test results require three conditions. The F-test is reasonably robust to mild violations of normality with large samples, but violations of independence are more serious.

1. Independence

Observations within and between groups must be independent. Violating this assumption — such as repeated measures without correction — invalidates the standard F-test.

2. Normality

Each group's data should be approximately normal. The F-test is robust to mild non-normality when sample sizes are large, but check with a Q-Q plot or Shapiro-Wilk test for small samples.

3. Homogeneity of Variance

Group variances should be approximately equal (homoscedasticity). Test with Levene's test or Bartlett's test before running ANOVA. If violated, use Welch's ANOVA instead.
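
The checks named above are available in scipy. The sketch below reuses the hypothetical Example 1 groups; it is a quick diagnostic pass, not a full assumption audit.

from scipy import stats

group_a = [72, 75, 78, 80, 74, 77, 79, 73, 76, 81]   # made-up data from the Example 1 sketch
group_b = [68, 71, 70, 74, 69, 72, 73, 67, 70, 75]
group_c = [80, 83, 85, 79, 82, 84, 86, 81, 78, 87]

# Normality per group (most useful for small samples)
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    stat, p = stats.shapiro(g)
    print(f"Shapiro-Wilk, group {name}: p = {p:.3f}")

# Homogeneity of variance across groups
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene's test: p = {p:.3f}")   # a small p suggests unequal variances; consider Welch's ANOVA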

Common Mistakes When Using the F Table

1. Confusing df1 and df2. Columns hold numerator df, rows hold denominator df. Swapping them gives the wrong critical value every time. Always confirm which df comes from groups and which comes from sample size.
2. Using the wrong significance level table. If your study uses α = 0.01, reading from the α = 0.05 table gives a critical value that is too low — making it easier to reject H₀ incorrectly.
3. Stopping at the F-test result. A significant ANOVA F-test tells you that group means differ — not which groups differ. Always follow up with a post-hoc test (Tukey's HSD, Bonferroni, or Scheffé) to find the specific pairs.
4. Rounding df up instead of down. When your exact df is not in the table, round down to the next lower listed value. Rounding up gives a smaller critical value that increases Type I error risk.

F Table PDF — Free Download

Download printable F distribution tables in PDF format. Each version includes critical values for all standard significance levels and covers a range of degrees of freedom combinations.

F Table: Most Commonly Referenced Critical Values

These are the F critical values that appear most often in textbooks, homework problems, and published research at α = 0.05.

F(1, 30) = 4.171 (α = 0.05)
F(2, 27) = 3.354 (α = 0.05)
F(3, 30) = 2.922 (α = 0.05)
F(4, 45) = 2.579 (α = 0.05)
F(2, 30) = 3.316 (α = 0.05)
F(2, 30) = 5.390 (α = 0.01)
F(5, 60) = 2.368 (α = 0.05)
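
If you want to reproduce or extend this quick-reference list, the short loop below computes each value with scipy; each tuple is (df1, df2, α).

from scipy import stats

cases = [(1, 30, 0.05), (2, 27, 0.05), (3, 30, 0.05), (4, 45, 0.05),
         (2, 30, 0.05), (2, 30, 0.01), (5, 60, 0.05)]

for df1, df2, alpha in cases:
    print(f"F({df1}, {df2}) at alpha = {alpha}: {stats.f.ppf(1 - alpha, df1, df2):.3f}")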

F Distribution: Key Facts

3: Standard significance levels covered (α = 0.10, 0.05, 0.01)
2: Degrees of freedom parameters required (df1 numerator, df2 denominator)
1920s: Decade Ronald Fisher developed the F distribution
F = t²: Relationship when df1 = 1 (F-test and t-test are equivalent)
0: Minimum possible F value (always ≥ 0)

Frequently Asked Questions About the F Table

What is an F table?

An F table (F distribution table) is a statistical reference listing critical F values for right-tailed hypothesis tests. To use it, select the correct significance level (α), find numerator df1 in the columns and denominator df2 in the rows, and read the critical value at their intersection.

How do you read the F table?

Select the table for your significance level (α). Columns represent numerator df (df1) and rows represent denominator df (df2). The cell at the intersection of your df1 column and df2 row is the critical F value. Reject H₀ if F-calculated ≥ this value.

What is the F table used for in ANOVA?

In ANOVA, the F table gives the critical threshold for the variance ratio statistic. For one-way ANOVA: numerator df = k − 1 (number of groups minus 1) and denominator df = N − k (total observations minus number of groups). A significant result means at least one group mean differs — not which group or groups.

What is the F critical value at df1=3, df2=30, α=0.05?

F(3, 30) at α = 0.05 has a critical value of 2.92. Any calculated F-statistic equal to or greater than 2.92 is statistically significant at the 5% level. Use the calculator at the top of this page to look up any combination of df1, df2, and α.

What is the difference between α = 0.05 and α = 0.01 in the F table?

The α = 0.01 table has higher critical values, requiring a larger F-statistic to reject H₀. For example, F(2, 30) at α = 0.05 is 3.32, but at α = 0.01 it rises to 5.39. Stricter tests reduce Type I error (false positives) but make it harder to detect real effects.

What does a significant F-test mean?

A significant F-test means the between-group variance is large relative to within-group variance — indicating at least one group mean or predictor has a statistically significant effect. In ANOVA, follow up with post-hoc tests to identify which groups differ.

What if my degrees of freedom are not in the F table?

Round down to the next lower listed df value — this gives a slightly more conservative (higher) critical value, which controls Type I error. For df2 greater than 120, use the ∞ row. For exact values, use Excel's F.INV.RT(), R's qf(), or Python's scipy.stats.f.ppf().

Is F = t² always true?

Only when df1 = 1. F(1, df) equals t²(df), so both tests give the same p-value in that case. For ANOVA comparing three or more groups, the F-test has no equivalent in the t-distribution.
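
A quick numerical check of this identity at df = 30 and α = 0.05:

from scipy import stats

t_crit = stats.t.ppf(0.975, df=30)         # two-tailed t critical value, about 2.042
f_crit = stats.f.ppf(0.95, dfn=1, dfd=30)  # right-tailed F critical value, about 4.17

print(t_crit ** 2, f_crit)                 # both are approximately 4.17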

Understanding the F Table: What the Values Mean

What the α Level Means

Each table corresponds to one α value: the probability of rejecting H₀ when it is actually true (Type I error). α = 0.05 gives a 5% false positive rate. Smaller α reduces false positives but demands a larger F-statistic to reach significance.

Why Larger df2 Gives Smaller Critical Values

As denominator df grows, the F distribution tightens around its mean and the right tail shrinks. The critical value drops because you need less evidence to reach significance with more data — the within-group variance estimate becomes more reliable.
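
A small illustration of this point, holding df1 = 3 fixed and letting df2 grow:

from scipy import stats

for df2 in [10, 20, 30, 60, 120, 1000]:
    print(df2, round(stats.f.ppf(0.95, 3, df2), 3))
# The critical value falls from about 3.71 at df2 = 10 toward roughly 2.61 at df2 = 1000.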

F-Statistic: What It Measures

The F-statistic is the ratio of between-group variance to within-group variance. An F close to 1 means group means are no more spread out than random chance predicts. A large F means groups differ more than expected — signaling a real effect.