Accuracy

Accuracy is a metric used to evaluate classification models. The accuracy of a model is the proportion of predictions the model got correct out of all the predictions the model made.

accuracy = number of correct predictions / total number of predictions

In other words, accuracy asks, β€œWhat proportion of the predictions was correct?”

Accuracy ranges from 0 (the worst performance) to 1 (the best performance).
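The formula above can be sketched directly in NumPy. The arrays here are made-up illustrative values, not from any real model:

```python
import numpy as np

# Ground-truth labels and a model's predictions (illustrative values)
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0])

# Count the predictions that match the true labels, divide by the total
correct = np.sum(y_true == y_pred)  # 4 of 5 predictions match
accuracy = correct / len(y_true)
print(accuracy)  # 0.8
```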

Computing accuracy in scikit-learn

import numpy as np
from sklearn.metrics import accuracy_score

# Ground-truth labels and the model's predictions (18 of the 20 match)
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1])
accuracy = accuracy_score(y_true, y_pred)

print(f"Accuracy: {accuracy:.2f}")
Accuracy: 0.90
