[Pitfall #1] Deceptive model evaluation

Predictive models need to be evaluated, i.e., their performance needs to be quantified with an appropriate evaluation metric to get a realistic estimate of how useful a model will be in practice and how many mistakes we should expect from it.

Since in supervised learning problems we know the ground truth, we can objectively evaluate different models and benchmark them against each other.

Is this a good model for the task?
I.e., does the model generate reliable predictions for new data points?

  • Split the data into training and test sets to get a reliable estimate of how the model will later perform on new data points that it wasn’t trained on (see the code sketch after this list).

  • Quantify the quality of the model’s predictions on the test set with a suitable evaluation metric (depending on the problem type).

    ⇒ Are some mistakes worse than others (e.g., consider false positives vs. false negatives in medical tests)?
    ⇒ Always choose a single metric/KPI to optimize (possibly subject to additional constraints such as runtime).
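
To make these two steps concrete, here is a minimal sketch with scikit-learn; the toy data, the choice of logistic regression as the model, and accuracy as the metric are assumptions made purely for illustration, not prescribed by the text.

```python
# Minimal sketch: train/test split and evaluation on a held-out test set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# toy classification data (replace with your own feature matrix X and labels y)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# hold out a test set that the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)

# quantify the quality of the predictions on the held-out test set
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```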

However, it can be quite easy to paint an overly optimistic picture here, so we always need to be critical and, for example, compare the performance of a model to that of a baseline. The simplest comparison is to a “stupid” model that only predicts the mean (→ regression) or the most frequent class (→ classification).
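
A convenient way to get such a baseline is scikit-learn’s DummyClassifier (or a DummyRegressor with strategy="mean" for regression problems), which ignores the input features entirely. The sketch below, again on made-up toy data, compares the baseline to an actual model.

```python
# Hedged sketch of a baseline comparison using scikit-learn's built-in
# "stupid" models; the toy data is made up for illustration.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# baseline that ignores the features and always predicts the most frequent class
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = LogisticRegression().fit(X_train, y_train)

# a useful model should clearly outperform the baseline
print("baseline accuracy:", baseline.score(X_test, y_test))
print("model accuracy:   ", model.score(X_test, y_test))
```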

Evaluation metrics in the face of unbalanced class distributions

The accuracy is a very commonly used evaluation metric for classification problems:
Accuracy: Fraction of samples that were classified correctly.
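
As a quick illustration of this definition, the accuracy can be computed with scikit-learn’s accuracy_score; the label vectors below are made up for this example.

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]  # 6 out of 8 predictions match the true labels

print(accuracy_score(y_true, y_pred))  # 0.75
```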

Below we see the decision boundaries of two models on a toy dataset, where the background color indicates whether the model predicts the blue or red class for a data point in this area. Which model do you think is more useful?

[Figure: decision boundaries of two models on a toy dataset; the background color indicates which class (blue or red) the model predicts in each area]

With unbalanced class distributions, e.g., in this case many more samples from the blue than from the red class, even a model that simply always predicts the most frequent class can achieve a high accuracy: if 90% of the samples belong to the blue class, always predicting blue already yields 90% accuracy. But while 90% accuracy might sound impressive when we report the model’s performance to the project’s stakeholders, it does not necessarily mean that the model is actually useful, especially since in real-world problems the underrepresented class is often the one we care about most, e.g., people with a rare disease or products with a defect.

Instead, the balanced accuracy is often the more informative measure when evaluating classification models and can help us distinguish between a model that has actually learned something and the “stupid” baseline:
Balanced Accuracy: First, the fraction of correctly classified samples is computed for each class individually, and then these values are averaged.

[Figure: accuracy and balanced accuracy of the two models from above]
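
To put numbers on the difference between the two metrics, here is a small worked example with made-up labels that mirror the situation above: 90 “blue” samples, 10 “red” samples, and a “stupid” model that always predicts blue.

```python
# Accuracy vs. balanced accuracy for the "always predict the majority class"
# baseline on an unbalanced toy dataset (labels are made up for illustration).
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = np.array([0] * 90 + [1] * 10)  # 0 = blue (majority), 1 = red (minority)
y_pred = np.zeros(100, dtype=int)       # baseline: always predict blue

print("accuracy:         ", accuracy_score(y_true, y_pred))           # 0.9
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))  # 0.5
```

The 90% accuracy merely reflects the class distribution, while the balanced accuracy of 50% reveals that this baseline is no better than random guessing.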