What Is the F-Statistic?
The F-statistic is the ratio of two variances, used to test whether two populations have equal variances (F-test) or whether group means differ significantly (ANOVA). It follows the F-distribution, which is right-skewed and depends on two degrees of freedom parameters (numerator df1 and denominator df2).
Named after Ronald Fisher, the F-test is fundamental to analysis of variance, regression analysis, and comparing statistical models. An F-value substantially greater than 1 suggests that the variation between groups exceeds the variation within groups by more than chance alone would produce.
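As a concrete sketch of the two-variance F-test described above (the data here are simulated, not from the source), the statistic is just the ratio of the two sample variances, referred to an F-distribution with (n1 − 1, n2 − 1) degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0, 2.0, size=30)  # sample drawn with the larger spread
b = rng.normal(0, 1.0, size=25)

# F-statistic: ratio of sample variances (ddof=1 gives unbiased estimates)
F = np.var(a, ddof=1) / np.var(b, ddof=1)
df1, df2 = len(a) - 1, len(b) - 1

# Two-sided p-value from the F-distribution
p = 2 * min(stats.f.sf(F, df1, df2), stats.f.cdf(F, df1, df2))
print(f"F = {F:.3f}, df = ({df1}, {df2}), p = {p:.4f}")
```

A common convention is to put the larger variance in the numerator so that F ≥ 1 and only the upper tail needs to be consulted.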
Formulas
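This copy of the section heading appears without its formulas; the standard definitions, consistent with the text above, are:

```latex
% Two-sample test for equality of variances
% (conventionally with the larger variance in the numerator):
F = \frac{s_1^2}{s_2^2}, \qquad df_1 = n_1 - 1, \quad df_2 = n_2 - 1

% One-way ANOVA with k groups and N total observations:
F = \frac{MS_{\text{between}}}{MS_{\text{within}}}
  = \frac{SS_{\text{between}} / (k - 1)}{SS_{\text{within}} / (N - k)}
```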
Critical Values (alpha=0.05)
| df1 \ df2 | 10 | 20 | 30 | 60 |
|---|---|---|---|---|
| 1 | 4.96 | 4.35 | 4.17 | 4.00 |
| 2 | 4.10 | 3.49 | 3.32 | 3.15 |
| 5 | 3.33 | 2.71 | 2.53 | 2.37 |
| 10 | 2.98 | 2.35 | 2.16 | 1.99 |
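The table entries are the 95th percentiles of the corresponding F-distributions, and any of them can be reproduced with SciPy's percent-point (inverse CDF) function:

```python
from scipy import stats

# Upper-tail critical value at alpha = 0.05, i.e. the 95th percentile
# of the F-distribution with numerator df1 = 5 and denominator df2 = 20
alpha = 0.05
crit = stats.f.ppf(1 - alpha, dfn=5, dfd=20)
print(round(crit, 2))  # matches the table entry for df1=5, df2=20: 2.71
```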
ANOVA Context
In one-way ANOVA, the F-statistic compares between-group variance to within-group variance. A large F indicates that group means differ more than expected by chance. If F exceeds the critical value at the chosen significance level, we reject the null hypothesis that all group means are equal.
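The one-way ANOVA comparison described above can be run in a few lines with `scipy.stats.f_oneway` (the three groups below are made-up illustration data, not from the source):

```python
from scipy import stats

# Three small groups with visibly different means
g1 = [4.1, 5.0, 4.8, 5.2, 4.6]
g2 = [5.9, 6.3, 6.0, 5.7, 6.4]
g3 = [7.1, 6.8, 7.4, 7.0, 6.6]

# F = between-group mean square / within-group mean square,
# with df1 = k - 1 = 2 and df2 = N - k = 12 here
F, p = stats.f_oneway(g1, g2, g3)
print(f"F = {F:.2f}, p = {p:.4f}")
```

Because the group means are well separated relative to the within-group spread, F comes out far above the critical value and the null hypothesis of equal means is rejected.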
Frequently Asked Questions
Can F be less than 1?
Yes. F < 1 means the denominator variance is larger than the numerator variance. In ANOVA, this means within-group variance exceeds between-group variance, so the test provides no evidence of group differences: since the alpha=0.05 critical values all exceed 1, an F below 1 can never reach significance.
What is the relationship between F and t?
For comparing two groups, F = t². A two-sample t-test and one-way ANOVA with two groups give equivalent results. F generalizes to three or more groups.
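The F = t² identity is easy to verify numerically: a pooled-variance two-sample t-test and a two-group one-way ANOVA on the same data (illustrative values below) produce matching statistics:

```python
from scipy import stats

a = [2.1, 2.5, 2.3, 2.8, 2.6, 2.4]
b = [3.0, 3.4, 3.1, 2.9, 3.3, 3.2]

t, p_t = stats.ttest_ind(a, b)   # pooled-variance two-sample t-test
F, p_f = stats.f_oneway(a, b)    # one-way ANOVA with two groups

# The squared t-statistic equals the ANOVA F-statistic,
# and the two-sided p-values agree as well
print(round(t**2, 6), round(F, 6))
```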
What assumptions does the F-test require?
Independence of observations, normal distribution within groups, and homogeneity of variances (for ANOVA). Violations of normality are more serious with small samples. Welch's ANOVA can be used when variances are unequal.
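One common way to screen the homogeneity-of-variances assumption is Levene's test, available as `scipy.stats.levene` (the simulated groups below, with one deliberately inflated spread, are an assumption of this sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.normal(5.0, 1.0, size=20)
g2 = rng.normal(5.5, 1.1, size=20)
g3 = rng.normal(6.0, 3.0, size=20)  # deliberately larger spread

# Levene's test: the null hypothesis is that all group variances are equal.
# A small p-value suggests the homogeneity assumption is violated, in which
# case Welch's ANOVA is the safer choice for comparing the means.
stat, p = stats.levene(g1, g2, g3)
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
```

SciPy itself does not ship a Welch ANOVA; third-party packages (e.g. pingouin's `welch_anova`) provide one.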