Precision

Precision is a metric used to evaluate classification models. A model's precision is the proportion of its positive predictions that are correct, out of all the positive predictions it made.

precision = number of correct positive predictions / total number of positive predictions

In other words, precision asks, "What proportion of the positive predictions was correct?"

Precision ranges from 0 (the worst performance) to 1 (the best performance).
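
To make the formula concrete, here is a minimal sketch that counts true positives and false positives by hand; the label arrays below are hypothetical, made up purely for illustration.

# A hand-rolled precision computation, assuming binary labels where 1 is the
# positive class. The example labels here are hypothetical.
y_true = [1, 0, 1, 1, 0]  # actual labels
y_pred = [1, 1, 1, 0, 0]  # model's predictions

# True positives: predicted positive and actually positive.
tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
# False positives: predicted positive but actually negative.
fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)

precision = tp / (tp + fp)
print(precision)  # 2 correct out of 3 positive predictions -> 0.666...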

Computing precision in scikit-learn

import numpy as np
from sklearn.metrics import precision_score

# Ground-truth labels and the model's predictions for 20 examples.
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])

# precision_score counts true and false positives and returns TP / (TP + FP).
precision = precision_score(y_true, y_pred)

print(f"Precision: {precision:.2f}")
Precision: 1.00

The model made nine positive predictions, and all nine were correct, so precision is 1.00 even though it missed two actual positives; those missed positives lower recall, not precision.
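
One edge case worth noting: if the model never predicts the positive class, the denominator TP + FP is zero. scikit-learn's precision_score accepts a zero_division parameter that controls the value returned in that case; a small sketch, again with made-up labels:

import numpy as np
from sklearn.metrics import precision_score

y_true = np.array([1, 0, 1])
y_pred = np.array([0, 0, 0])  # the model predicted no positives at all

# With no positive predictions, TP + FP is 0. Setting zero_division=0 returns
# 0.0 instead of emitting the undefined-metric warning the default produces.
precision = precision_score(y_true, y_pred, zero_division=0)
print(precision)  # 0.0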
