Precision, Recall, and F Score Concepts in Details


There are different evaluation metrics that can help with these kinds of skewed datasets. Those evaluation metrics are called precision-recall evaluation metrics.



Precision measures what fraction of the transactions we predicted as fraudulent (predicted class 1) are actually fraudulent. Precision can be calculated using the following formula:

Precision = True Positives / (True Positives + False Positives)


Recall tells us what fraction of all the transactions that are actually fraudulent we detected as fraudulent. In other words, of all the genuinely fraudulent transactions, how many did we catch so the proper authority at the bank could take action? When I first read these definitions of precision and recall, it took me some time to really understand the difference. I hope you are getting it faster. If not, don’t worry. You are not alone.

Recall = True Positives / (True Positives + False Negatives)
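As a quick sketch of both formulas, here is a small example with hypothetical labels (the actual/predicted lists are made up for illustration; 1 means fraudulent):

```python
# Toy labels: 1 = fraudulent, 0 = legitimate (hypothetical example data)
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 0, 1, 0, 1, 0, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of the transactions we flagged, how many are real frauds
recall = tp / (tp + fn)     # of the real frauds, how many did we flag
print(precision, recall)
```

Here we flagged 3 transactions, 2 of which were real frauds (precision 2/3), but we only caught 2 of the 4 real frauds (recall 1/2).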

Making Decisions From Precision And Recall

Precision and recall give a better sense of how an algorithm is actually doing, especially when we have a highly skewed dataset. If we predict 0 all the time and get 99.5% accuracy, recall will be 0 and precision will be undefined (there are no positive predictions, so no true positives at all). So you know that classifier is not a good classifier. When precision and recall are both high, that is an indication that the algorithm is doing very well.
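To see this concretely, here is a sketch with a hypothetical skewed dataset of 1,000 transactions, only 5 of which are fraudulent:

```python
# Hypothetical skewed dataset: 995 legitimate, 5 fraudulent transactions
actual = [0] * 995 + [1] * 5
predicted = [0] * 1000  # a "classifier" that always predicts class 0

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
recall = tp / (tp + fn)  # every single fraud is missed

print(accuracy, recall)  # 99.5% accuracy, yet recall is 0
```

Accuracy alone looks impressive here, but the recall of 0 exposes that the model never catches a single fraudulent transaction.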


F1 Score

The F1 score combines precision and recall into a single number. It is not the regular arithmetic average, which does not work well here; it is the harmonic mean of the two. Look at the formula:

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
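A minimal sketch of this formula, showing why the harmonic mean is a better summary than the arithmetic average (the example precision/recall pairs are made up):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # avoid division by zero when both are 0
    return 2 * precision * recall / (precision + recall)

# A lopsided classifier: perfect precision but it catches almost nothing.
# The arithmetic average would be a misleading 0.51; the F1 score stays low.
print(f1(1.0, 0.02))  # ≈ 0.039

# A balanced classifier scores much better.
print(f1(0.8, 0.7))   # ≈ 0.747
```

Because the harmonic mean is dragged toward the smaller of the two values, a model can only get a high F1 score when precision and recall are both high.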


In this article, you learned how to evaluate a classifier on a skewed dataset, and how to combine precision and recall into a single F1 score. I hope it was helpful.


#MachineLearning #DataScience #ArtificialIntelligence #Technology
