What Is MCC?
The Matthews Correlation Coefficient (MCC) is a balanced measure for binary classification quality that accounts for all four confusion matrix values: true positives, true negatives, false positives, and false negatives. It produces a value between -1 and +1.
Unlike accuracy, MCC is informative even when classes are imbalanced. A perfect classifier has MCC = +1, random prediction gives MCC = 0, and complete disagreement gives MCC = -1. It is considered one of the best single metrics for classification evaluation.
Formula
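The standard definition, in terms of the four confusion-matrix counts:

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

If any of the four sums in the denominator is zero, the value is undefined; by convention MCC is then set to 0.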
Interpretation
| MCC Range | Interpretation |
|---|---|
| +0.7 to +1.0 | Strong positive correlation |
| +0.3 to +0.7 | Moderate positive correlation |
| -0.3 to +0.3 | Weak or no correlation |
| -0.7 to -0.3 | Moderate negative correlation |
| -1.0 to -0.7 | Strong negative correlation |
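The computation behind these ranges is small enough to sketch directly. A minimal Python version (the counts in the example are hypothetical), using the common convention of returning 0 when the denominator is zero:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts.

    Returns 0.0 when any marginal sum is zero (the usual convention),
    since the formula is otherwise undefined in that case.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical balanced classifier: 90% correct in each class.
print(mcc(tp=90, tn=90, fp=10, fn=10))  # 0.8 -> "strong positive"
```

A value of 0.8 falls in the top band of the table above.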
Frequently Asked Questions
Why is MCC better than accuracy?
Accuracy can be misleading on imbalanced datasets. If 95% of cases are negative, a model that always predicts negative scores 95% accuracy, yet its MCC is 0 (strictly, the formula's denominator is zero in this case, and MCC is conventionally defined as 0). MCC correctly flags the model as no better than random.
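That 95%-negative scenario can be checked with a few lines of arithmetic (the counts below are the hypothetical dataset from the answer above):

```python
import math

# 100 cases: 95 negatives, 5 positives; the model always predicts negative.
tp, fn = 0, 5    # every positive is missed
tn, fp = 95, 0   # every negative is "correct" for free

accuracy = (tp + tn) / (tp + tn + fp + fn)

den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
mcc = (tp * tn - fp * fn) / den if den else 0.0  # 0/0 -> 0 by convention

print(accuracy)  # 0.95
print(mcc)       # 0.0
```

High accuracy, zero MCC: exactly the failure mode the answer describes.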
What is the relationship between MCC and F1?
F1 is the harmonic mean of precision and recall and ignores true negatives, while MCC is a correlation coefficient that uses all four confusion matrix quadrants. MCC is therefore more informative when correct negative predictions matter as well as correct positive ones.
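The difference is easy to demonstrate: hold TP, FP, and FN fixed and vary only TN. F1 cannot change, but MCC does. The counts below are hypothetical:

```python
import math

def f1(tp, fp, fn):
    # Harmonic mean of precision and recall; TN never appears.
    return 2 * tp / (2 * tp + fp + fn)

def mcc(tp, tn, fp, fn):
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

# Same TP/FP/FN, only TN differs:
print(f1(90, 10, 10), mcc(90, tn=10, fp=10, fn=10))   # 0.9  0.4
print(f1(90, 10, 10), mcc(90, tn=890, fp=10, fn=10))  # 0.9  ~0.889
```

F1 reports 0.9 in both cases; MCC distinguishes the model that also handles the large negative class well.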
Can MCC be negative?
Yes, a negative MCC indicates the predictions are worse than random (inversely correlated with truth). This means the model is consistently wrong and you could improve it by inverting its predictions.
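The inversion claim follows from the formula: flipping every prediction swaps TP with FN and TN with FP, which negates the numerator while leaving the denominator's four factors unchanged. A quick check with hypothetical counts:

```python
import math

def mcc(tp, tn, fp, fn):
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

# A consistently wrong model: 90% of each class misclassified.
tp, tn, fp, fn = 10, 10, 90, 90
print(mcc(tp, tn, fp, fn))   # -0.8

# Inverting every prediction swaps TP<->FN and TN<->FP:
print(mcc(fn, fp, tn, tp))   # +0.8
```

The inverted model's MCC is exactly the negation of the original's.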