What Is the Standard Error of the Mean?
The standard deviation of the sample mean, commonly called the standard error of the mean (SEM), measures how much the sample mean is expected to vary from sample to sample. While the standard deviation measures the spread of individual observations, the SEM measures the precision of the sample mean as an estimate of the population mean.
The SEM decreases as sample size increases, reflecting the fact that larger samples provide more precise estimates. This relationship is described by the square root law: SEM = SD / sqrt(n). Because of the square root, halving the SEM requires quadrupling the sample size. The SEM is fundamental to constructing confidence intervals and performing t-tests.
Formula
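The SEM is the sample standard deviation divided by the square root of the sample size:

SEM = SD / sqrt(n)

where SD is the sample standard deviation and n is the number of observations. A minimal sketch in Python using only the standard library (the function name `sem` and the example data are illustrative):

```python
import math
import statistics

def sem(sample):
    """Standard error of the mean: sample SD divided by sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7]
print(f"SD  = {statistics.stdev(data):.3f}")
print(f"SEM = {sem(data):.3f}")
```

Note that `statistics.stdev` uses the n − 1 (sample) denominator, which is the usual convention when the population SD is unknown.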
Relationship to Sample Size
| Sample Size | SEM (relative to SD) | 95% CI Width (relative to SD) |
|---|---|---|
| 10 | 0.316 × SD | 1.24 × SD |
| 25 | 0.200 × SD | 0.78 × SD |
| 100 | 0.100 × SD | 0.39 × SD |
| 400 | 0.050 × SD | 0.20 × SD |
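The table entries follow directly from the formula: the relative SEM is 1/sqrt(n), and the relative full width of a 95% confidence interval, using the normal approximation, is 2 × 1.96/sqrt(n). A quick check in Python:

```python
import math

for n in (10, 25, 100, 400):
    rel_sem = 1 / math.sqrt(n)              # SEM as a multiple of SD
    rel_ci_width = 2 * 1.96 * rel_sem       # full 95% CI width, normal approximation
    print(f"n = {n:4d}: SEM = {rel_sem:.3f} x SD, 95% CI width = {rel_ci_width:.2f} x SD")
```

For small samples a t critical value (larger than 1.96) would widen the interval somewhat; the table uses the normal approximation throughout.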
Frequently Asked Questions
Should I report SD or SEM?
Report SD when you want to describe the variability of individual observations in your sample. Report SEM when you want to indicate the precision of the sample mean. For error bars on graphs of means, SEM bars convey how precisely each mean is estimated (and give a rough visual hint about whether group means differ), while SD bars show the actual spread of the data. Whichever you choose, state it explicitly in the figure caption, since the two can differ by a large factor.
Why does SEM decrease with larger samples?
With more data points, individual deviations above and below the mean tend to cancel out, making the average more stable. For independent observations, the standard deviation of the sample mean is exactly SD/sqrt(n), whatever the shape of the original distribution. The Central Limit Theorem adds that, provided the population has finite variance, the sample means are also approximately normally distributed, which is what justifies the usual confidence intervals.
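This stabilizing effect is easy to see by simulation. The sketch below (assumptions mine: exponential data with SD = 1, samples of size 50, 5000 repetitions) draws many samples from a strongly skewed distribution and shows that the empirical spread of their means is close to SD/sqrt(n):

```python
import math
import random
import statistics

random.seed(42)

N, REPS = 50, 5000  # sample size, number of repeated samples
# Exponential(1) is strongly right-skewed and has SD = 1
means = [statistics.fmean(random.expovariate(1) for _ in range(N))
         for _ in range(REPS)]

observed = statistics.stdev(means)   # empirical spread of the sample means
predicted = 1 / math.sqrt(N)         # SD / sqrt(n), with SD = 1
print(f"observed spread of means:  {observed:.4f}")
print(f"predicted SEM (1/sqrt(n)): {predicted:.4f}")
```

The two numbers agree to about two decimal places even though the underlying data are far from normal.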