What Is Specificity?
Specificity, also called the true negative rate, measures the proportion of actual negative cases that a diagnostic test correctly identifies as negative. A test with high specificity rarely produces false positive results, meaning that when the test is positive, you can be more confident the person truly has the condition.
High specificity is essential for confirmatory tests where a false positive has serious consequences, such as unnecessary surgery, psychological distress, or costly follow-up procedures. For example, a biopsy confirmation test should have very high specificity to avoid unnecessary treatment of healthy individuals.
Formula
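Specificity is the fraction of true negatives (TN) among all actual negatives, where the actual negatives are the true negatives plus the false positives (FP):

Specificity = TN / (TN + FP)

A minimal sketch of this calculation in Python (the function name and example counts are illustrative):

```python
def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of actual negatives correctly identified."""
    if tn + fp == 0:
        raise ValueError("no negative cases: specificity is undefined")
    return tn / (tn + fp)

# Example: of 100 healthy people, 95 test negative and 5 test positive (false positives)
print(specificity(tn=95, fp=5))  # 0.95
```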
Clinical Applications
| Specificity | Quality | Example Use |
|---|---|---|
| ≥ 99% | Very high | Confirmatory HIV test |
| 95-99% | High | Diagnostic imaging |
| 85-95% | Moderate | General screening |
| < 85% | Low | Preliminary screening |
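The bands in the table can be expressed as a simple lookup. This is only a sketch of the thresholds listed above; the handling of exact boundary values is an assumption:

```python
def specificity_quality(spec: float) -> str:
    """Map a specificity value (0-1 scale) to the quality band in the table above."""
    if spec >= 0.99:
        return "Very high"
    if spec >= 0.95:
        return "High"
    if spec >= 0.85:
        return "Moderate"
    return "Low"

print(specificity_quality(0.995))  # Very high
print(specificity_quality(0.90))   # Moderate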
Frequently Asked Questions
Why is specificity important for confirmatory tests?
Confirmatory tests are used after a positive screening result to rule in a diagnosis. High specificity ensures that very few healthy people test positive, so a positive confirmatory result strongly suggests disease. This is captured by the concept that a highly specific test, when positive, rules in the disease (SpPIn mnemonic).
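How strongly a positive result "rules in" disease can be quantified with Bayes' theorem: the positive predictive value (PPV) combines sensitivity, specificity, and disease prevalence. A sketch with illustrative numbers (the 1% prevalence and test characteristics are assumptions, not values from this article):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test), via Bayes' theorem."""
    true_pos = sensitivity * prevalence              # diseased people who test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy people who test positive
    return true_pos / (true_pos + false_pos)

# At 1% prevalence, raising specificity from 0.95 to 0.999 sharply improves PPV
print(round(ppv(0.99, 0.95, 0.01), 3))   # 0.167
print(round(ppv(0.99, 0.999, 0.01), 3))  # 0.909
```

This is why screening positives are followed by a confirmatory test: even a good screening test yields many false positives when the condition is rare.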
Can a test have both high sensitivity and high specificity?
Yes, but there is usually a trade-off. Lowering the positivity cutoff of a continuous test increases sensitivity at the expense of specificity, and raising it does the opposite. The ROC curve visualizes this trade-off, and the area under the ROC curve (AUC) measures overall test discrimination ability.
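The trade-off can be seen in a tiny worked example. The test scores and disease labels below are invented for illustration; a score at or above the cutoff counts as a positive result:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff counts as positive."""
    tp = sum(s >= cutoff and y for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y for s, y in zip(scores, labels))
    tn = sum(s < cutoff and not y for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and not y for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data: label 1 = diseased, 0 = healthy
scores = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   1,   0,   1,   1,   1]

for cutoff in (0.5, 0.8):
    sens, spec = sens_spec(scores, labels, cutoff)
    print(f"cutoff={cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
# cutoff=0.5: sensitivity=1.00, specificity=0.67
# cutoff=0.8: sensitivity=0.50, specificity=1.00
```

Sweeping the cutoff across all observed scores and plotting sensitivity against (1 − specificity) at each point traces out the ROC curve.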