Accuracy
Accuracy is a metric for evaluating classification models. A model's accuracy is the proportion of predictions it got correct out of all the predictions it made.
In other words, accuracy asks, "What proportion of the predictions was correct?"
Accuracy ranges from 0 (the worst performance) to 1 (the best performance).
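As a minimal sketch of the definition itself (the label arrays below are made up for illustration), accuracy can be computed directly as the fraction of predictions that match the true labels:

import numpy as np

# Hypothetical ground-truth labels and model predictions (illustration only)
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0])

# Accuracy = number of correct predictions / total number of predictions
accuracy = np.mean(y_true == y_pred)
print(f"Accuracy: {accuracy:.2f}")  # 4 of 5 predictions are correct -> 0.80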
Computing accuracy in scikit-learn
import numpy as np
from sklearn.metrics import accuracy_score

# Ground-truth labels and the model's predicted labels
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])

# Proportion of predictions that match the true labels
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy:.2f}")
Accuracy: 0.90
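In this example, 18 of the 20 predictions match the true labels (the predictions at positions 2 and 9 are wrong), so the accuracy is 18/20 = 0.90.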