Precision
Precision is a metric used to evaluate classification models. The precision of a model is the proportion of its positive predictions that are actually correct, that is, the number of true positives divided by the total number of positive predictions (true positives plus false positives).
In other words, precision asks, “What proportion of the positive predictions was correct?”
Precision ranges from 0 (the worst performance) to 1 (the best performance).
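The definition can be applied directly by counting prediction outcomes. Below is a minimal sketch (using made-up labels, and assuming 1 marks the positive class) that computes precision as true positives divided by all positive predictions:
import numpy as np
# Made-up ground-truth labels and predictions for illustration
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 1, 1, 0, 0, 1])
# True positives: predicted positive and actually positive
tp = np.sum((y_pred == 1) & (y_true == 1))
# False positives: predicted positive but actually negative
fp = np.sum((y_pred == 1) & (y_true == 0))
precision = tp / (tp + fp)
print(f"Precision: {precision:.2f}")  # 3 / (3 + 1) = 0.75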
Computing precision in scikit-learn
import numpy as np
from sklearn.metrics import precision_score
# Ground-truth labels and the model's predictions (1 is the positive class)
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])
# precision_score returns true positives / (true positives + false positives)
precision = precision_score(y_true, y_pred)
print(f"Precision: {precision:.2f}")
Precision: 1.00
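With this data, the model makes nine positive predictions, and all nine match actual positives, so there are no false positives and precision is 9 / 9 = 1.00. The two actual positives the model misses (false negatives) do not affect precision; they would lower recall instead.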