Matthews Correlation Coefficient (MCC) Calculator

Calculate the MCC for binary classification evaluation. Enter the confusion matrix values (TP, TN, FP, FN) to assess model performance.


What Is MCC?

The Matthews Correlation Coefficient (MCC) is a balanced measure for binary classification quality that accounts for all four confusion matrix values: true positives, true negatives, false positives, and false negatives. It produces a value between -1 and +1.

Unlike accuracy, MCC is informative even when classes are imbalanced. A perfect classifier has MCC = +1, random prediction gives MCC = 0, and complete disagreement gives MCC = -1. It is considered one of the best single metrics for classification evaluation.

Formula

MCC = (TP×TN - FP×FN) / √[(TP+FP)(TP+FN)(TN+FP)(TN+FN)]
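The formula above translates directly into code. A minimal sketch in Python (the function name and the convention of returning 0 when the denominator vanishes are choices made here, not part of the formula itself):

```python
from math import sqrt

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: MCC is taken as 0 when any marginal sum is zero,
    # since the denominator vanishes and the ratio is undefined.
    return numerator / denominator if denominator else 0.0

print(mcc(90, 85, 10, 15))  # a reasonably good classifier, ≈ 0.75
print(mcc(50, 50, 0, 0))    # perfect classifier, 1.0
```

A perfect classifier (FP = FN = 0) gives exactly +1, matching the range described above.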

Interpretation

MCC Range        Interpretation
+0.7 to +1.0     Strong positive correlation
+0.3 to +0.7     Moderate positive correlation
-0.3 to +0.3     Weak or no correlation
-0.7 to -0.3     Moderate negative correlation
-1.0 to -0.7     Strong negative correlation

Frequently Asked Questions

Why is MCC better than accuracy?

Accuracy can be misleading with imbalanced datasets. If 95% of cases are negative, a model that always predicts negative has 95% accuracy but MCC = 0. MCC correctly identifies this as no better than random.

What is the relationship between MCC and F1?

Both summarize the confusion matrix in a single number, but they are built differently: F1 is the harmonic mean of precision and recall and ignores true negatives, while MCC is a correlation coefficient that uses all four quadrants. MCC is more informative when both positive and negative predictions matter.
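The difference is easy to demonstrate: two classifiers with identical TP, FP, and FN get the same F1 no matter how many true negatives they have, while MCC changes. A sketch (both helper functions implement the standard formulas; the specific counts are illustrative):

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)  # F1 never sees TN

# Same TP/FP/FN, very different TN:
print(f1(90, 10, 10), mcc(90, 90, 10, 10))  # 0.9, 0.8
print(f1(90, 10, 10), mcc(90, 1, 10, 10))   # 0.9, ≈ -0.009
```

With only one true negative, the model is barely better than flipping a coin on the negative class, which MCC reports and F1 hides.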

Can MCC be negative?

Yes, a negative MCC indicates the predictions are worse than random (inversely correlated with truth). This means the model is consistently wrong and you could improve it by inverting its predictions.
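Inverting every prediction swaps TP with FN and TN with FP, which flips the sign of the numerator while leaving the denominator unchanged, so MCC flips sign exactly. A quick check (the counts here are an arbitrary consistently-wrong example):

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# A consistently wrong model: mostly false positives and false negatives.
tp, tn, fp, fn = 5, 10, 40, 45
print(mcc(tp, tn, fp, fn))  # negative, ≈ -0.70

# Inverting every prediction swaps TP<->FN and TN<->FP:
print(mcc(fn, fp, tn, tp))  # same magnitude, positive
```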