What are the OvR (One versus the Rest) and OvO (One versus One) strategies in Machine Learning?


OvR (One versus the Rest) Strategy –

Suppose we want to classify digit images into 10 classes (0 through 9). We can do this by training 10 binary classifiers, one for each digit (a 0-detector, a 1-detector, a 2-detector, and so on). Then, to classify an image, we get the decision score from each classifier for that image and select the class whose classifier outputs the highest score. This is called the one versus the rest (OvR) strategy (also called one versus all).
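As a minimal sketch of this idea, here is how OvR could look using scikit-learn's `OneVsRestClassifier` wrapper (the library and the small built-in digits dataset are assumptions; the article itself does not name any implementation). The wrapper trains one binary classifier per class and predicts the class with the highest decision score:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

# Load the 10-class digits dataset (a small stand-in for MNIST)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# OneVsRestClassifier fits one binary LogisticRegression per class
ovr = OneVsRestClassifier(LogisticRegression(max_iter=5000))
ovr.fit(X_train, y_train)

# 10 classes -> 10 binary classifiers
print(len(ovr.estimators_))            # → 10

# decision_function returns one score per class; predict picks the argmax
scores = ovr.decision_function(X_test[:1])
print(scores.shape)                    # → (1, 10)
```

Note that many scikit-learn classifiers apply OvR automatically for multiclass targets; the explicit wrapper just makes the strategy visible.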

OvO (One Versus One) Strategy –

In one versus one, we train a binary classifier for every pair of digits: one to distinguish 0s and 1s, another to distinguish 0s and 2s, another for 1s and 2s, and so on. This is called the one versus one (OvO) strategy. If there are N classes, you need to train N * (N - 1) / 2 classifiers. For the MNIST problem, this means training 45 binary classifiers. To classify an image, you run it through all 45 classifiers and see which class wins the most duels. The main advantage of OvO is that each classifier only needs to be trained on the part of the training set containing the two classes it must distinguish.
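A sketch of the same setup with scikit-learn's `OneVsOneClassifier` (again an assumption, not something the article specifies) confirms the N * (N - 1) / 2 count: with 10 digit classes, the wrapper fits 45 pairwise classifiers, each trained only on the samples of its two classes:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier

X, y = load_digits(return_X_y=True)

# For N = 10 classes, OvO trains 10 * 9 / 2 = 45 pairwise classifiers
ovo = OneVsOneClassifier(LogisticRegression(max_iter=5000))
ovo.fit(X, y)

print(len(ovo.estimators_))    # → 45
```

At prediction time the wrapper tallies the pairwise "duels" and returns the class that wins the most of them.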

Some algorithms (such as Support Vector Machine classifiers) scale poorly with the size of the training set. For these algorithms OvO is preferred, because it is faster to train many classifiers on small training sets than to train a few classifiers on a large training set. For most binary classification algorithms, however, OvR is preferred.
