
How to assess quality and correctness of classification models? Part 4 – ROC Curve

By Algolytics | Predictive models | 30 June, 2015

In the previous parts of our tutorial we discussed:

  • Basic notation used in assessing classification models
  • Quantitative quality indicators
  • Confusion Matrix

In this fourth part of the tutorial we will discuss the ROC curve.

What is the ROC curve?

The ROC (Receiver Operating Characteristic) curve is one of the methods for visualizing classification quality. It shows how the TPR (True Positive Rate) depends on the FPR (False Positive Rate) as the classification threshold varies.

[Figure: example ROC curve, TPR plotted against FPR]
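To make the two rates concrete, here is a minimal sketch (not part of the original post) that computes TPR and FPR from the four cells of a Confusion Matrix; the counts are made up purely for illustration.

```python
# TPR = TP / (TP + FN): the share of actual positives correctly detected.
# FPR = FP / (FP + TN): the share of actual negatives wrongly flagged as positive.

def tpr_fpr(tp, fn, fp, tn):
    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return tpr, fpr

# Hypothetical counts, for illustration only:
print(tpr_fpr(tp=80, fn=20, fp=10, tn=90))  # (0.8, 0.1) is one point on a ROC curve
```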

The more convex the curve (the closer it lies to the top left corner of the plot), the better the classifier. In the example below, the “green” classifier is better in area 1 and the “red” classifier is better in area 2.

[Figure: two crossing ROC curves (“green” and “red”); green is better in area 1, red in area 2]
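To see such a comparison in practice, below is a rough sketch (our own illustration, not from the post) that trains two classifiers on a synthetic dataset and draws their ROC curves on the same axes; the dataset, the models and the use of scikit-learn's roc_curve are all assumptions made here for the example.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, split into train and test sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    prob = model.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test, prob)  # threshold sweep done by sklearn
    plt.plot(fpr, tpr, label=name)

plt.plot([0, 1], [0, 1], "k--", label="random classifier")  # diagonal baseline
plt.xlabel("FPR (False Positive Rate)")
plt.ylabel("TPR (True Positive Rate)")
plt.legend()
plt.show()
```

The classifier whose curve lies above the other in a given region is the better one there, exactly as in the green/red example above.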

How is the ROC curve created?

  1. We compute the values of the decision function (the estimated probability of the positive class) for every observation.
  2. We test the classifier for different alpha thresholds. Recall that alpha is the threshold of the estimated probability: observations scored above it are assigned to the positive class, and those below it to the negative class.
  3. Each classification at a given alpha threshold yields a (TPR, FPR) pair, which corresponds to one point on the ROC curve (see the code sketch after the example below).
  4. Each such classification also has its own corresponding Confusion Matrix.

Example:

[Figure: example ROC curve points obtained for different alpha thresholds]

[Figure: the corresponding confusion matrices]
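Following the steps above, here is a minimal sketch with made-up labels and probability scores (not the data behind the figures): each alpha threshold produces one classification, one Confusion Matrix and therefore one (TPR, FPR) point.

```python
import numpy as np

# Hypothetical true labels and estimated probabilities of the positive class:
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_prob = np.array([0.05, 0.1, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3, 0.75, 0.4])

# Sweep the alpha threshold over the observed scores (plus a value above 1,
# so that the all-negative classification is included as well).
for alpha in sorted(set(y_prob)) + [1.01]:
    y_pred = (y_prob >= alpha).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    tpr = tp / (tp + fn)   # True Positive Rate
    fpr = fp / (fp + tn)   # False Positive Rate
    print(f"alpha={alpha:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```

Plotting the resulting (FPR, TPR) pairs, sorted by FPR, traces out the ROC curve.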

Assessing the classifier on the basis of the ROC curve

[Figure: area under the ROC curve (AUC)]

The quality of classification can be assessed from the ROC curve by calculating:

  • the Area Under the ROC Curve (AUC) coefficient

The higher the value of the AUC coefficient, the better. AUC = 1 means a perfect classifier, while AUC = 0.5 is obtained for a purely random classifier. AUC < 0.5 means the classifier performs worse than a random one.

  • the Gini Coefficient: GC = 2 * AUC – 1 (the classifier’s advantage over a purely random one)

The higher the value of GC, the better. GC = 1 denotes a perfect classifier, GC = 0 denotes a purely random one.
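As a quick numerical sketch (made-up labels and scores again, and scikit-learn's roc_auc_score, which the post itself does not mention), the Gini Coefficient follows directly from the AUC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical true labels and estimated probabilities:
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_prob = np.array([0.05, 0.1, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3, 0.75, 0.4])

auc = roc_auc_score(y_true, y_prob)   # area under the ROC curve
gini = 2 * auc - 1                    # GC = 2 * AUC - 1
print(f"AUC  = {auc:.3f}")
print(f"Gini = {gini:.3f}")
```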

The last part of our tutorial will be dedicated to the LIFT curve.
