Confusion Matrix:
| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | TP | FN |
| Actual Negative | FP | TN |
A confusion matrix is a table that summarizes the performance of a classification algorithm by comparing predicted classes against actual classes. It's a fundamental tool for evaluating machine learning models in binary classification problems.
The confusion matrix is constructed by counting four outcomes:
True Positives (TP): actual positive cases predicted as positive
False Positives (FP): actual negative cases predicted as positive
False Negatives (FN): actual positive cases predicted as negative
True Negatives (TN): actual negative cases predicted as negative
From these four counts, the following metrics are derived (a short Python sketch after the formulas shows the same calculations):
Accuracy: overall correctness of the model, (TP + TN) / (TP + FP + FN + TN)
Precision: when the model predicts positive, how often is it correct? TP / (TP + FP)
Recall (Sensitivity): how many of the actual positives were correctly identified? TP / (TP + FN)
F1 Score: harmonic mean of precision and recall, (2 * Precision * Recall) / (Precision + Recall)
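If you want to check the calculator's output by hand, the formulas above take only a few lines of Python. The sketch below is illustrative: the helper name `binary_metrics` and the sample label lists are made up for the example, and denominators of zero simply fall back to 0.

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute accuracy, precision, recall and F1 from the four cell counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall) / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Tally the four cells from paired actual/predicted labels (1 = positive, 0 = negative).
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # 3
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # 1
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # 1
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # 3

print(binary_metrics(tp, fp, fn, tn))
# {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```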
Instructions: Enter the counts for each category (TP, FP, FN, TN) from your classification results. The calculator will compute all relevant performance metrics.
Q1: What's better - high precision or high recall?
A: It depends on your use case. For spam detection, a false positive (a legitimate email flagged as spam) is costly, so you want high precision. For disease screening, a false negative (a missed case) is costly, so you want high recall.
Q2: What does an F1 score of 1 mean?
A: A perfect score of 1 means both precision and recall are 1 (perfect classification).
Q3: Can I use this for multi-class problems?
A: This calculator is for binary classification. For a multi-class problem you either read a single N x N matrix or break it into one binary (one-vs-rest) matrix per class, as in the sketch below.
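For example, scikit-learn exposes this per-class (one-vs-rest) view through `multilabel_confusion_matrix`; each resulting 2x2 block can then be fed into the binary formulas above. A minimal sketch, assuming scikit-learn is installed and using made-up labels:

```python
from sklearn.metrics import multilabel_confusion_matrix

y_true = ["cat", "dog", "bird", "cat", "dog", "bird"]
y_pred = ["cat", "cat", "bird", "cat", "dog", "dog"]

# Shape (n_classes, 2, 2); each block is laid out as [[TN, FP], [FN, TP]] for its class.
per_class = multilabel_confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"])
for label, block in zip(["bird", "cat", "dog"], per_class):
    (tn, fp), (fn, tp) = block
    print(label, {"TP": tp, "FP": fp, "FN": fn, "TN": tn})
```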
Q4: What if my confusion matrix has zeros?
A: Some metrics become undefined (like precision when TP+FP=0). The calculator handles these cases.
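For instance, precision is undefined for a model that never predicts the positive class; a bare division raises an error, and one common convention (sketched here with made-up counts, not a description of this calculator's internals) is to fall back to 0:

```python
# A hypothetical model that never predicts positive: TP + FP = 0.
tp, fp, fn, tn = 0, 0, 5, 95

try:
    precision = tp / (tp + fp)   # ZeroDivisionError: the metric is undefined here
except ZeroDivisionError:
    precision = 0.0              # common fallback; reporting "undefined" is another option
print(precision)  # 0.0
```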
Q5: How do I get these values from my model?
A: Most ML libraries (scikit-learn, TensorFlow) provide functions to compute the confusion matrix.
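For example, with scikit-learn the binary confusion matrix can be flattened straight into the four inputs this calculator asks for (the label lists below are stand-ins for your model's outputs):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # actual labels
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # model predictions

# For binary labels, scikit-learn orders the flattened 2x2 matrix as TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)  # 3 1 1 3
```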