
Model Evaluation


Precision

  • Also known as the positive predictive value:
\[\frac{TruePositives}{TruePositives + FalsePositives}\]
  • Precision depends on the prevalence, i.e., the proportion of samples that are, in fact, positive.
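As a quick sketch (the counts here are made up purely for illustration), precision follows directly from the confusion counts:

```python
def precision(tp, fp):
    # Positive predictive value: the fraction of predicted
    # positives that are truly positive.
    return tp / (tp + fp)

# Hypothetical counts: 90 true positives, 10 false positives
print(precision(90, 10))  # 0.9
```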

Recall (Sensitivity)

  • Also known as sensitivity or the true positive rate:
\[\frac{TruePositives}{TruePositives + FalseNegatives}\]
  • This is the probability of a positive prediction given that the instance is, in fact, positive.
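A matching sketch for recall, again with made-up counts:

```python
def recall(tp, fn):
    # True positive rate: the fraction of actual positives
    # that the model catches.
    return tp / (tp + fn)

# Hypothetical counts: 90 true positives, 30 false negatives
print(recall(90, 30))  # 0.75
```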

F1 Score

  • The harmonic mean of precision and recall, the F-measure:
\[F = 2 \cdot \frac{precision \cdot recall}{precision + recall}\]
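Because it is a harmonic mean, the F1 score is pulled toward the smaller of the two values; a model cannot score well by excelling at only one of precision or recall. A small sketch with hypothetical values:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall;
    # low whenever either input is low.
    return 2 * precision * recall / (precision + recall)

# Hypothetical precision/recall values
print(round(f1_score(0.9, 0.75), 3))  # 0.818
```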

  • True positive: Sick people correctly identified as sick
  • False positive: Healthy people incorrectly identified as sick
  • True negative: Healthy people correctly identified as healthy
  • False negative: Sick people incorrectly identified as healthy
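Using the sick/healthy framing above, the four outcome types can be tallied directly from labels; the tiny dataset here is invented for illustration:

```python
# Hypothetical screening results: true condition vs. model prediction
y_true = ["sick", "healthy", "sick",    "healthy", "sick", "healthy"]
y_pred = ["sick", "sick",    "healthy", "healthy", "sick", "healthy"]

pairs = list(zip(y_true, y_pred))
tp = sum(t == "sick"    and p == "sick"    for t, p in pairs)  # correctly flagged sick
fp = sum(t == "healthy" and p == "sick"    for t, p in pairs)  # healthy flagged sick
tn = sum(t == "healthy" and p == "healthy" for t, p in pairs)  # correctly cleared
fn = sum(t == "sick"    and p == "healthy" for t, p in pairs)  # sick missed

print(tp, fp, tn, fn)  # 2 1 2 1
```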

Relationship of these metrics

The classification report below (in scikit-learn's classification_report format) shows these metrics side by side for an imbalanced two-class problem:

              precision    recall  f1-score   support

           0       0.85      0.93      0.89      4687
           1       0.63      0.42      0.50      1313

    accuracy                           0.82      6000
   macro avg       0.74      0.67      0.69      6000
weighted avg       0.80      0.82      0.80      6000
  • The precision of class 0: \(0.85\)
  • The recall (sensitivity) of class 0: \(0.93\)
  • The false discovery rate of class 0: \(1 - 0.85 = 0.15\)
  • The precision of class 1, the minority class: \(0.63\)
  • The recall of class 1: \(0.42\)
  • The false discovery rate of class 1: \(1 - 0.63 = 0.37\)
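The summary rows of the report follow directly from the per-class rows: the macro average is the unweighted mean over classes, while the weighted average weights each class by its support. A quick check using the numbers above:

```python
# Per-class precision and support, copied from the report above
p0, s0 = 0.85, 4687
p1, s1 = 0.63, 1313

macro_precision = (p0 + p1) / 2                        # unweighted mean over classes
weighted_precision = (p0 * s0 + p1 * s1) / (s0 + s1)   # support-weighted mean
fdr_class1 = 1 - p1                                    # false discovery rate = 1 - precision

print(round(macro_precision, 2))     # 0.74
print(round(weighted_precision, 2))  # 0.8
print(round(fdr_class1, 2))          # 0.37
```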