What Is Standard Deviation?
Standard deviation is the most widely used measure of statistical dispersion. It quantifies the amount of variation or spread in a dataset. A low standard deviation indicates that data points tend to be close to the mean, while a high standard deviation indicates that data points are spread out over a wider range of values.
Standard deviation is expressed in the same units as the original data, making it more interpretable than variance (which is in squared units). It is used extensively in science, engineering, finance, quality control, and social sciences for summarizing data variability, constructing confidence intervals, and performing hypothesis tests.
Formulas

Population standard deviation, for a complete population of N values with mean μ:

σ = √( Σ(xᵢ − μ)² / N )

Sample standard deviation, for a sample of n values with sample mean x̄ (note the n − 1 denominator, Bessel's correction):

s = √( Σ(xᵢ − x̄)² / (n − 1) )
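Both the population and sample standard deviations can be computed from first principles; here is a minimal sketch in plain Python using only the standard library (the dataset is an arbitrary illustrative one):

```python
import math

def population_sd(data):
    """Population standard deviation: average squared deviation over N."""
    n = len(data)
    mu = sum(data) / n
    return math.sqrt(sum((x - mu) ** 2 for x in data) / n)

def sample_sd(data):
    """Sample standard deviation: divide by n - 1 (Bessel's correction)."""
    n = len(data)
    xbar = sum(data) / n
    return math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

values = [2, 4, 4, 4, 5, 5, 7, 9]
print(population_sd(values))  # 2.0
print(sample_sd(values))      # ~2.138
```

Note that the sample version is always slightly larger than the population version for the same data, since it divides the same sum of squared deviations by a smaller denominator.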
Empirical Rule (68-95-99.7)
| Range | Percentage (Normal) |
|---|---|
| μ ± 1σ | 68.27% of values |
| μ ± 2σ | 95.45% of values |
| μ ± 3σ | 99.73% of values |
Frequently Asked Questions
When should I use sample vs population standard deviation?
Use population standard deviation (dividing by N) when you have data for the entire population. Use sample standard deviation (dividing by n-1) when your data is a sample from a larger population. The n-1 denominator (Bessel's correction) provides an unbiased estimate of the population variance.
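Python's standard library exposes both estimators directly: `statistics.pstdev` divides by N (population) and `statistics.stdev` divides by n − 1 (sample). A short comparison on an arbitrary dataset:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population SD: use when `data` is the entire population (divide by N).
print(statistics.pstdev(data))  # 2.0

# Sample SD: use when `data` is a sample from a larger population
# (divide by n - 1, Bessel's correction).
print(statistics.stdev(data))   # ~2.138
```

In NumPy the same switch is the `ddof` parameter: `np.std(data)` defaults to the population form (`ddof=0`), while `np.std(data, ddof=1)` gives the sample form.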
What is a good standard deviation?
There is no universal good or bad standard deviation. It depends entirely on context. A standard deviation of 5 is tiny for house prices but huge for body temperature in Celsius. The coefficient of variation (CV = SD/mean) is better for comparing variability across different scales.
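The scale-dependence point above can be made concrete with the coefficient of variation; a small sketch (the house-price and temperature values are hypothetical, chosen only to contrast scales):

```python
import statistics

def cv(data):
    """Coefficient of variation: sample SD divided by the mean (unitless)."""
    return statistics.stdev(data) / statistics.mean(data)

# Hypothetical example data at very different scales
house_prices = [250, 300, 320, 410, 500]     # thousands of dollars
body_temps = [36.4, 36.6, 36.8, 37.0, 37.2]  # degrees Celsius

print(f"house prices CV: {cv(house_prices):.3f}")
print(f"body temps   CV: {cv(body_temps):.4f}")
```

Because CV is unitless, it shows directly that the house prices vary far more relative to their typical value than the body temperatures do, even though both could be summarized with a raw SD.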