The ROC Curve
The receiver operating characteristic (ROC) curve is a common tool used with binary classifiers. The ROC curve plots the true positive rate (TPR, also called recall or sensitivity) against the false positive rate (FPR). The FPR is the ratio of negative instances that are incorrectly classified as positive. It is equal to 1 - TNR, where the true negative rate (TNR) is the ratio of negative instances that are correctly classified as negative. The TNR is also called specificity. Hence, the ROC curve plots sensitivity versus 1 - specificity.
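To make this concrete, here is a minimal sketch of how a ROC curve might be plotted, assuming scikit-learn and Matplotlib are available; the labels and decision scores below are hypothetical stand-ins for a real classifier's output:

    # Minimal ROC curve sketch (hypothetical toy data, not a real classifier).
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]                    # ground-truth labels (hypothetical)
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]  # decision scores (hypothetical)

    # roc_curve sweeps the decision threshold and returns one
    # (FPR, TPR) pair per threshold.
    fpr, tpr, thresholds = roc_curve(y_true, y_score)

    plt.plot(fpr, tpr, label="classifier")
    plt.plot([0, 1], [0, 1], "k--", label="random classifier")  # chance diagonal
    plt.xlabel("False Positive Rate (1 - specificity)")
    plt.ylabel("True Positive Rate (sensitivity)")
    plt.legend()
    plt.show()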
There is a trade-off: the higher the recall (TPR), the higher the false positive rate (FPR) the classifier produces. One way to compare classifiers is to measure the area under the curve (AUC). A perfect classifier has a ROC AUC equal to 1, whereas a purely random classifier has a ROC AUC equal to 0.5.
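Computing the ROC AUC takes a single call in scikit-learn; this sketch reuses the same hypothetical toy data as above:

    from sklearn.metrics import roc_auc_score

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]                    # hypothetical labels
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]  # hypothetical scores

    # 1.0 means a perfect ranking of positives above negatives;
    # 0.5 means no better than chance. This toy data yields 0.875.
    print(roc_auc_score(y_true, y_score))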
Both the ROC curve and the precision/recall (PR) curve can be used to evaluate a classifier, so which one should you use? As a rule of thumb, prefer the PR curve whenever the positive class is rare or when you care more about the false positives than the false negatives. Otherwise, use the ROC curve.
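For comparison, plotting a PR curve follows the same pattern; this is again a sketch with hypothetical data, assuming scikit-learn and Matplotlib:

    import matplotlib.pyplot as plt
    from sklearn.metrics import precision_recall_curve

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]                    # hypothetical labels
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]  # hypothetical scores

    # precision_recall_curve also sweeps the threshold, returning one
    # (precision, recall) pair per threshold.
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)

    plt.plot(recall, precision)
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.show()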