What Is Sensitivity?
Sensitivity, also called the true positive rate or recall, measures the proportion of actual positive cases that a diagnostic test correctly identifies. A test with high sensitivity will correctly identify most people who have the disease, minimizing the number of false negatives (missed cases).
Sensitivity is crucial for screening tests where missing a case (false negative) has serious consequences. For example, cancer screening tests prioritize high sensitivity to ensure that as few cancers as possible go undetected. However, high sensitivity often comes at the cost of lower specificity (more false positives).
Formula
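Sensitivity is defined from the confusion-matrix counts, where TP is the number of true positives (diseased people the test flags) and FN is the number of false negatives (diseased people the test misses):

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}
```

Multiplying by 100 expresses the result as a percentage; a sensitivity of 0.95 means the test detects 95% of people who actually have the disease.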
Clinical Significance
| Sensitivity | Interpretation | Example |
|---|---|---|
| ≥ 99% | Very high | Blood bank screening |
| 95-99% | High | Cancer screening |
| 80-95% | Moderate | Rapid diagnostic tests |
| < 80% | Low | Some point-of-care tests |
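The formula can be sketched in a few lines of Python. The function name and the example counts below are illustrative, not taken from any study in this article.

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of actual positive cases the test correctly identifies."""
    total_actual_positives = true_positives + false_negatives
    if total_actual_positives == 0:
        raise ValueError("No actual positive cases; sensitivity is undefined.")
    return true_positives / total_actual_positives

# Hypothetical screening study: 190 diseased patients detected, 10 missed.
print(sensitivity(190, 10))  # 0.95, i.e. 95% -- the "High" band in the table above
```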
Frequently Asked Questions
What is the difference between sensitivity and specificity?
Sensitivity measures how well a test detects true positives (people with the disease), while specificity measures how well it identifies true negatives (people without the disease). A perfect test has both 100% sensitivity and 100% specificity, but in practice there is usually a trade-off between the two.
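The trade-off can be illustrated with a toy threshold sweep over a continuous test score: lowering the cutoff that counts as "positive" catches more true cases (higher sensitivity) but also flags more healthy people (lower specificity). All labels and scores below are invented for demonstration.

```python
# Toy data: 1 = diseased, 0 = healthy, with a made-up test score per subject.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]

def sens_spec(threshold: float) -> tuple[float, float]:
    """Sensitivity and specificity when scores >= threshold count as positive."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Lowering the threshold raises sensitivity but lowers specificity.
for t in (0.75, 0.5, 0.25):
    print(f"threshold={t}: sensitivity={sens_spec(t)[0]}, specificity={sens_spec(t)[1]}")
```

Here the strictest cutoff (0.75) gives sensitivity 0.5 with specificity 1.0, while the loosest (0.25) gives sensitivity 1.0 with specificity 0.5, mirroring the trade-off described above.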
When is high sensitivity most important?
High sensitivity is critical when the consequences of missing a diagnosis are severe (e.g., cancer, HIV, meningitis), when the disease is treatable and early detection improves outcomes, and when the test is used for screening in the general population.