Confusion Matrix

Kinder Chen
Sep 21, 2021


A confusion matrix is a way to evaluate the performance of a classifier. The general idea is to count the number of times instances of class A are classified as class B. To compute the confusion matrix, we first need a set of predictions that can be compared to the actual targets.
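As a minimal sketch (not from the original post), the matrix can be computed with scikit-learn, using cross-validated predictions so every instance is scored by a model that never saw it during training. The dataset and variable names here are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical binary classification data
X, y = make_classification(n_samples=1000, random_state=42)
clf = LogisticRegression()

# Out-of-fold predictions, compared against the actual targets
y_pred = cross_val_predict(clf, X, y, cv=3)
cm = confusion_matrix(y, y_pred)
print(cm)
```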

Each row in a confusion matrix represents an actual class, while each column represents a predicted class. The first row of this matrix considers the negative class: correctly classified instances are called true negatives, while the wrongly classified instances are called false positives. The second row considers the positive class: the wrongly classified instances are called false negatives, while the correctly classified instances are called true positives. A perfect classifier would have only true positives and true negatives, so its confusion matrix would have nonzero values only on its main diagonal.
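Continuing the sketch above, scikit-learn follows this same layout for binary problems (actual classes as rows, predicted classes as columns), so the four cells can be unpacked directly; a perfect classifier's matrix is diagonal.

```python
# Unpack the 2x2 matrix: [[TN, FP], [FN, TP]]
tn, fp, fn, tp = cm.ravel()
print(f"TN={tn}  FP={fp}\nFN={fn}  TP={tp}")

# A perfect classifier puts every count on the main diagonal;
# comparing the targets with themselves simulates that case.
perfect_cm = confusion_matrix(y, y)
print(perfect_cm)
```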

One axis of the confusion matrix represents the ground-truth values of the items the model made predictions on, while the other axis represents the labels predicted by the classifier. Note that, beyond binary classification, many tasks are multiclass in nature; for a problem with n classes the confusion matrix simply becomes an n × n matrix.
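The same API generalizes to the multiclass case. A small hypothetical example with three classes:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 2, 2, 2, 1, 0, 1, 1])

# Each row is an actual class, each column a predicted class;
# off-diagonal cells count misclassifications between class pairs.
print(confusion_matrix(y_true, y_pred))
```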
