What Is a T-Test?
A t-test is a statistical hypothesis test used to determine whether there is a significant difference between the means of two groups. It is one of the most commonly used inferential statistical tests in research. The two-sample (independent) t-test compares the means of two separate groups to assess whether they come from populations with equal means.
The test produces a t-statistic and an associated p-value. If the p-value is below the chosen significance level (commonly 0.05), the null hypothesis of equal means is rejected, indicating a statistically significant difference between the groups.
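As a minimal sketch of this workflow (the group data below are simulated purely for illustration), a two-sample test can be run with SciPy's `stats.ttest_ind`:

```python
import numpy as np
from scipy import stats

# Simulated example data: two independent groups (invented for illustration)
rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)
group_b = rng.normal(loc=11.0, scale=2.0, size=50)

# equal_var=False selects Welch's t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Compare the p-value against the chosen significance level
alpha = 0.05
if p_value < alpha:
    print("Reject the null hypothesis of equal means")
else:
    print("Fail to reject the null hypothesis of equal means")
```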
Formula (Welch's T-Test)
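For two groups with sample means $\bar{x}_1, \bar{x}_2$, sample variances $s_1^2, s_2^2$, and sample sizes $n_1, n_2$, the Welch t-statistic is:

$$
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}
$$

The degrees of freedom are approximated by the Welch–Satterthwaite equation:

$$
\nu \approx \frac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}
$$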
Types of T-Tests
| Type | Use Case | Assumption |
|---|---|---|
| One-sample | Compare sample mean to known value | Normal distribution |
| Independent two-sample | Compare means of two groups | Independent observations |
| Paired | Compare means of matched pairs | Paired observations |
| Welch's | Two groups with unequal variances | No equal variance assumption |
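The four variants in the table map directly onto different SciPy calls. A brief sketch, using an invented before/after measurement scenario:

```python
import numpy as np
from scipy import stats

# Invented paired data: a measurement before and after some treatment
rng = np.random.default_rng(0)
before = rng.normal(loc=120.0, scale=10.0, size=30)
after = before - rng.normal(loc=3.0, scale=5.0, size=30)

# One-sample: compare the mean of `before` to a hypothesized value of 115
t1, p1 = stats.ttest_1samp(before, popmean=115.0)

# Independent two-sample (Student's: equal variances assumed)
t2, p2 = stats.ttest_ind(before, after, equal_var=True)

# Welch's: equal_var=False drops the equal-variance assumption
t3, p3 = stats.ttest_ind(before, after, equal_var=False)

# Paired: accounts for the correlation within matched pairs
t4, p4 = stats.ttest_rel(before, after)

for name, p in [("one-sample", p1), ("Student's", p2),
                ("Welch's", p3), ("paired", p4)]:
    print(f"{name}: p = {p:.4f}")
```

Note that treating genuinely paired data as independent (as the middle two calls do here) discards the pairing information, which is why the paired test is usually more powerful for matched measurements.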
Frequently Asked Questions
What assumptions does the t-test require?
The t-test assumes: (1) the data are continuous, (2) observations are independent, (3) the data are approximately normally distributed (less important with larger samples, thanks to the central limit theorem), and (4) for Student's t-test, equal variances between groups (Welch's t-test relaxes this assumption).
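One common (though debated) workflow is to check these assumptions first — Shapiro-Wilk for normality, Levene's test for equal variances — and choose between Student's and Welch's tests accordingly. A sketch with simulated data:

```python
import numpy as np
from scipy import stats

# Simulated groups with deliberately different variances (for illustration)
rng = np.random.default_rng(1)
group_a = rng.normal(loc=50.0, scale=5.0, size=40)
group_b = rng.normal(loc=52.0, scale=8.0, size=40)

# Shapiro-Wilk: null hypothesis is that the sample is normally distributed
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Levene's test: null hypothesis is equal variances across groups
_, p_levene = stats.levene(group_a, group_b)

# Rule of thumb (a convention, not a law): if Levene's p < 0.05,
# prefer Welch's t-test over Student's
equal_var = bool(p_levene >= 0.05)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"equal_var={equal_var}, t = {t_stat:.3f}, p = {p_value:.4f}")
```

Many statisticians recommend simply defaulting to Welch's test, since it performs well even when the variances happen to be equal.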
What does p-value mean in a t-test?
The p-value is the probability of observing a test statistic at least as extreme as the one calculated, assuming the null hypothesis is true. A p-value of 0.03 means there is a 3% chance of seeing a difference this large or larger if the true means were equal. It does not indicate the probability that the null hypothesis is true.
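To make the definition concrete, a two-sided p-value can be computed from a t-statistic using the t-distribution's survival function (the statistic and degrees of freedom below are hypothetical values chosen for illustration):

```python
from scipy import stats

# Hypothetical result: t-statistic of 2.2 with 28 degrees of freedom
t_stat, df = 2.2, 28

# Two-sided p-value: probability of |T| >= |t_stat| under the null,
# i.e. both tails of the t-distribution
p_value = 2 * stats.t.sf(abs(t_stat), df)
print(f"p = {p_value:.4f}")
```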