Handling imbalanced data distributions is an important part of the machine learning workflow.

An imbalanced dataset is one in which the number of instances of one class is much higher than that of the other; in other words, the number of observations is not the same for all the classes in a classification dataset. This problem arises not only in binary-class data but also in multi-class data.

In this article, we list some important techniques that will help you deal with your imbalanced data.

1| Oversampling

This technique modifies the unequal class distribution to create a balanced dataset. When the quantity of data is insufficient, oversampling tries to restore balance by increasing the number of rare (minority-class) samples.
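As a minimal sketch (not part of the original article), the simplest form of oversampling, random duplication of minority-class samples with replacement, can be done with the imbalanced-learn library; the synthetic dataset and parameter values below are arbitrary choices for illustration.

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler

# Toy imbalanced dataset: roughly 90% class 0, 10% class 1.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print(Counter(y))  # e.g. Counter({0: 897, 1: 103})

# Randomly duplicate minority-class samples (sampling with replacement)
# until both classes have the same number of observations.
ros = RandomOverSampler(random_state=42)
X_resampled, y_resampled = ros.fit_resample(X, y)
print(Counter(y_resampled))  # both classes now have the majority-class count
```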

A primary technique used in oversampling is SMOTE (Synthetic Minority Over-sampling TEchnique). In this technique, the minority class is over-sampled by producing synthetic examples rather than by over-sampling with replacement: for each minority-class observation, SMOTE computes its k nearest neighbours (k-NN) and, depending on the amount of oversampling required, randomly chooses neighbours from which to interpolate new synthetic samples. The technique is limited by the assumption that the local space between any two positive instances belongs to the minority class, which may not hold when the training data is not linearly separable. A key advantage of oversampling is that there is no loss of information.
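A rough sketch of SMOTE, again using the imbalanced-learn library on an arbitrary synthetic dataset; k_neighbors corresponds to the k used in the k-NN step described above.

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced dataset: roughly 90% class 0, 10% class 1.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print(Counter(y))

# SMOTE interpolates between each minority-class sample and one of its
# k nearest minority-class neighbours to create synthetic examples.
smote = SMOTE(k_neighbors=5, random_state=42)
X_resampled, y_resampled = smote.fit_resample(X, y)
print(Counter(y_resampled))  # classes are now balanced
```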

2| Undersampling

Unlike oversampling, this technique balances an imbalanced dataset by reducing the size of the abundant (majority) class. There are various methods for classification problems, such as cluster centroids and Tomek links. The cluster-centroid method replaces clusters of majority-class samples with the cluster centroids found by a K-means algorithm, while the Tomek-link method removes unwanted overlap between classes until all minimally distanced nearest neighbours belong to the same class; both methods are sketched in the example below.

Undersampling can improve run time by decreasing the amount of training data, and it also helps in solving memory problems.
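The sketch below, again relying on the imbalanced-learn library and an arbitrary synthetic dataset, illustrates the two undersampling methods mentioned above.

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.under_sampling import ClusterCentroids, TomekLinks

# Toy imbalanced dataset: roughly 90% class 0, 10% class 1.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print(Counter(y))

# Cluster centroids: replace clusters of majority-class samples with the
# centroids found by K-means, shrinking the majority class to the minority size.
cc = ClusterCentroids(random_state=42)
X_cc, y_cc = cc.fit_resample(X, y)
print(Counter(y_cc))

# Tomek links: remove majority-class samples that form a Tomek link
# (pairs of nearest neighbours from opposite classes), cleaning the class boundary.
tl = TomekLinks()
X_tl, y_tl = tl.fit_resample(X, y)
print(Counter(y_tl))  # only the overlapping majority samples are removed
```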


3| Cost-Sensitive Learning Technique

Cost-Sensitive Learning (CSL) takes misclassification costs into consideration and seeks to minimise the total cost rather than the raw error rate, while still pursuing high accuracy in classifying examples into a set of known classes. It plays an important role in machine learning algorithms, including real-world data mining applications.

In this technique, the costs of false positives (FP), false negatives (FN), true positives (TP), and true negatives (TN) can be represented in a cost matrix, where C(i,j) denotes the cost of classifying an instance as class "i" (the predicted class) when its actual class is "j". Here is an example of a cost matrix for binary classification.[…]
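Since the article's own example is truncated above, here is a minimal illustrative sketch, with made-up costs, of how such a cost matrix can be combined with a model's predicted class probabilities to make the minimum-expected-cost prediction.

```python
import numpy as np

# Hypothetical cost matrix C[i, j]: cost of predicting class i when the true class is j.
# Here a false negative (predicting 0 when the true class is 1) costs 10x a false positive.
C = np.array([[0.0, 10.0],   # predicted 0: TN cost, FN cost
              [1.0,  0.0]])  # predicted 1: FP cost, TP cost

# probs[:, j] = model's estimated probability that the true class is j
probs = np.array([[0.80, 0.20],
                  [0.40, 0.60],
                  [0.95, 0.05]])

# Expected cost of each possible prediction: E[cost | predict i] = sum_j C[i, j] * P(true = j)
expected_cost = probs @ C.T            # shape (n_samples, n_classes)
predictions = expected_cost.argmin(axis=1)
print(predictions)                     # [1 1 0]: pick the class with minimum expected cost
```

With these costs, even a sample that is 80% likely to be class 0 is assigned to class 1, because the expected cost of a false negative outweighs that of a false positive.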

read more – copyright by www.analyticsindiamag.com