What Is Youden's Index?
Youden's Index (the J statistic), introduced by W. J. Youden in 1950, is a single summary measure of a diagnostic test's performance that combines sensitivity and specificity in one number. For any informative test it ranges from 0 (useless: no better than chance) to 1 (perfect: 100% sensitivity and 100% specificity); a negative J indicates a test that performs worse than chance, which usually means its positive and negative labels should be inverted.
Youden's Index is particularly useful for comparing diagnostic tests and for choosing the optimal cutoff point on a ROC curve: the optimal cutoff is the one that maximizes J, balancing the trade-off between sensitivity and specificity. Geometrically, J is the vertical distance between a point on the ROC curve and the diagonal line of no discrimination, so maximizing J selects the point farthest above that diagonal.
Formula
J = Sensitivity + Specificity − 1
Equivalently, J = Sensitivity − (1 − Specificity), i.e., the true positive rate minus the false positive rate.
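As a minimal sketch, the formula J = sensitivity + specificity − 1 translates directly into code (the function name `youden_index` is illustrative, not from any particular library):

```python
def youden_index(sensitivity: float, specificity: float) -> float:
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    for value in (sensitivity, specificity):
        if not 0.0 <= value <= 1.0:
            raise ValueError("sensitivity and specificity must lie in [0, 1]")
    return sensitivity + specificity - 1

# A test with 80% sensitivity and 90% specificity:
print(round(youden_index(0.80, 0.90), 4))  # 0.7
```

A J of 0.7 would fall in the "Good" band of the interpretation table below.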
Interpretation
| J Value | Test Quality |
|---|---|
| J = 0 | Useless (no better than chance) |
| 0 < J ≤ 0.3 | Poor |
| 0.3 < J ≤ 0.6 | Moderate |
| 0.6 < J ≤ 0.8 | Good |
| 0.8 < J ≤ 1.0 | Excellent |
Frequently Asked Questions
Why not just use accuracy?
Accuracy can be misleading with imbalanced classes. If 95% of patients are healthy, a test that always says "negative" has 95% accuracy but zero Youden's Index. Youden's Index requires both sensitivity AND specificity to be high, making it a more balanced measure of test performance.
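To make this concrete, here is the 95%-healthy scenario worked through in code, using the hypothetical counts from the paragraph above (100 patients, 95 healthy, 5 diseased, and a "test" that always predicts negative):

```python
# Confusion-matrix counts for an always-negative test on 100 patients
# (95 healthy, 5 diseased) -- hypothetical numbers for illustration.
tp, fn = 0, 5    # all 5 diseased patients are missed
tn, fp = 95, 0   # all 95 healthy patients are correctly labelled negative

accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.95
sensitivity = tp / (tp + fn)                 # 0.0 -- catches no disease
specificity = tn / (tn + fp)                 # 1.0
youden_j = sensitivity + specificity - 1     # 0.0 -- no better than chance

print(accuracy, youden_j)  # 0.95 0.0
```

Accuracy looks impressive at 0.95, while J correctly reports that the test has no diagnostic value.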
How is Youden's Index used to find the optimal cutoff?
For a diagnostic test with continuous results, different cutoff values produce different sensitivity-specificity pairs. The optimal cutoff is the one that maximizes Youden's Index. This is found by computing J for each possible cutoff and selecting the maximum. On a ROC curve, this corresponds to the point farthest from the diagonal.
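The sweep described above can be sketched as follows. The scores are made-up illustrative values (higher score assumed to indicate disease); in practice you would evaluate J at each threshold produced from your own data, e.g. via a ROC routine:

```python
# Hypothetical test scores; higher values are assumed to indicate disease.
diseased = [0.9, 0.8, 0.75, 0.6, 0.55]   # scores for diseased patients
healthy = [0.7, 0.5, 0.4, 0.35, 0.2]     # scores for healthy patients

def j_at_cutoff(cutoff: float) -> float:
    """Youden's J when scores >= cutoff are called positive."""
    sensitivity = sum(s >= cutoff for s in diseased) / len(diseased)
    specificity = sum(s < cutoff for s in healthy) / len(healthy)
    return sensitivity + specificity - 1

# Evaluate J at every observed score and keep the maximizer.
candidates = sorted(set(diseased + healthy))
best_cutoff = max(candidates, key=j_at_cutoff)
print(best_cutoff, round(j_at_cutoff(best_cutoff), 4))  # 0.55 0.8
```

Only the observed scores need to be checked as candidate cutoffs, because sensitivity and specificity change only when the cutoff crosses a data point.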