Supervised machine learning comes in two types: 1) Regression, 2) Classification
We have already seen that regression deals with continuous data: the predicted or target value is continuous in nature. In classification problems, by contrast, the target value is discrete in nature; it belongs to some class.
Let’s understand some use cases first, then we will proceed further. Here I am listing some domains and the respective questions people deal with on a regular basis while working on machine learning algorithms.
[table id=3 /]
The answer to each of these questions is a discrete class. The number of labels or classes can vary from a minimum of two (for example, yes or no, true or false) up to multiple labels at once. For example, a machine learning news story related to the finance domain could be tagged with several categories such as Technology, AI, ML, and Finance; which tags apply depends on the categories we have, and tagging one news item with multiple categories is an example of multilabel classification.
Precisely, we can say that classification is a machine learning technique where we categorize data into a given number of classes. A classification algorithm identifies the class under which new data will fall.
Terminologies of classification algorithms:
- Binary classification: This type of classification deals with two possible predictions, such as true or false, male or female, yes or no.
- Multi-class classification: This type of classification deals with more than two classes, but the predicted value or outcome is assigned to exactly one of those classes. For example, a person classified against 10 given regions will belong to only one region out of the 10.
- Multilabel classification: In this type of classification, the mapped output value can belong to more than one class. For example, a sports news article could belong to the classes player, location, and game at the same time.
- Classifier: This is an algorithm that maps input features to a particular class.
Steps for classification algorithms
Since classification is a type of supervised machine learning, the procedure is the same as for supervised learning in general.
- Data preparation: In this phase, we deal with data loading, exploratory data analysis, etc.
- Initialize: Initialize the classifier to be used.
- Train the classifier: We fit the classifier on the training dataset with the explanatory variables and the target variable. For example, in scikit-learn we call .fit(X_train, y_train), where X is the matrix of explanatory variables and y is the target variable or label.
- Predict the target value using test data: In this step, we predict the outcomes for the test data or unseen data. For example, in scikit-learn we call .predict(X_test).
- Evaluate: Evaluate the classifier model.
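The steps above can be sketched end to end with scikit-learn; the iris dataset and logistic regression below are just illustrative choices, not the only options.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Data preparation: load features X and labels y
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# 2. Initialize the classifier
clf = LogisticRegression(max_iter=1000)

# 3. Train: fit on the explanatory variables and target variable
clf.fit(X_train, y_train)

# 4. Predict the target value for unseen data
y_pred = clf.predict(X_test)

# 5. Evaluate the classifier model
print(accuracy_score(y_test, y_pred))
```

The same five steps apply whichever classifier from the list below you initialize in step 2.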
Different Classification Algorithms
- Logistic Regression:
- Logistic regression uses a logistic function, which predicts the probability that a given sample belongs to a particular class. The logistic function is also called the sigmoid function due to its S-shaped curve.
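The S-shaped logistic (sigmoid) function itself is simple to write down; here is a plain-NumPy sketch, not tied to any particular library API:

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued score z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Large positive scores map near 1, large negative scores near 0,
# and a score of 0 maps to exactly 0.5.
print(sigmoid(0.0))   # 0.5
print(sigmoid(10.0))  # close to 1
```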
- Naive Bayes Classifier
- The Naive Bayes algorithm is based on Bayes’ Theorem. It assumes all features are independent of each other, and it is very useful in document classification, spam filtering, etc.
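A small sketch of Naive Bayes for spam filtering, assuming scikit-learn; the tiny toy corpus below is made up purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win free money now", "free prize claim now",
         "meeting at noon", "project report attached"]
labels = ["spam", "spam", "ham", "ham"]

# Convert documents to bag-of-words count features
vec = CountVectorizer()
X = vec.fit_transform(texts)

# Fit a multinomial Naive Bayes model on the word counts
clf = MultinomialNB()
clf.fit(X, labels)

print(clf.predict(vec.transform(["claim your free money"])))
```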
- K-nearest Neighbors
- KNN is also known as a lazy learner. To make a prediction for a new data point, the algorithm finds the closest data points in the training dataset.
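The "lazy" part is visible in a from-scratch sketch: there is no training step, just stored data and a majority vote among the k nearest points at prediction time. The 2-D points below are made up for illustration.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Euclidean distance from the new point to every stored point
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Labels of the k closest training points
    nearest = [y_train[i] for i in np.argsort(dists)[:k]]
    # Majority vote among those neighbors
    return Counter(nearest).most_common(1)[0][0]

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = ["A", "A", "A", "B", "B", "B"]

print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # "A"
```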
- Decision Tree
- A decision tree builds a tree-like structure over the training dataset by asking a series of questions. Decision trees are used for both classification and regression.
- Random Forest
- Random forest is a type of ensemble learning. The objective of ensemble learning is to combine weak learners to build a more robust and stronger learner, so we can say that a random forest is a collection of decision trees.
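A hedged scikit-learn sketch of that collection-of-trees idea; n_estimators is the number of trees whose votes are combined, and the iris dataset is again just an illustrative choice:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# An ensemble of 100 decision trees, each trained on a random
# bootstrap sample; predictions are combined by majority vote
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print(forest.score(X_test, y_test))
```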
- Neural Network
- A neural network consists of units (neurons), arranged in layers, which convert an input vector into some output. Each unit takes an input, applies an (often nonlinear) function to it, and then passes the output on to the next layer.
- Stochastic Gradient Descent
- Stochastic gradient descent is very useful when we have large datasets and is very efficient for fitting linear models. It supports different loss functions.
- Support vector machine
- The main intention behind a support vector machine is to maximize the margin between classes. The margin is defined by the separating hyperplane, and the data points that are closest to the hyperplane are called support vectors.
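A minimal sketch of a linear SVM, assuming scikit-learn; after fitting, the margin-defining training points are exposed as support_vectors_. The two clusters of points below are made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters
X = np.array([[0, 0], [1, 1], [0, 1], [5, 5], [6, 5], [5, 6]])
y = [0, 0, 0, 1, 1, 1]

# Linear kernel: fit a maximum-margin separating hyperplane
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[0.5, 0.5], [5.5, 5.5]]))
print(clf.support_vectors_)  # the points closest to the hyperplane
```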
Classification Performance Metrics
- Accuracy: Accuracy answers “What % of predictions were correct?” (TP+TN)/(TP+TN+FP+FN), where TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative.
- True positive rate or Recall: Recall answers “What % of positive cases did the model catch?” TP/(TP+FN).
- Precision (exactness): Precision answers “What % of positive predictions were correct?” TP/(TP+FP).
- F1 Score: The harmonic mean of precision and recall: (2 × Precision × Recall)/(Precision + Recall).
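The four metrics above can be computed directly from confusion-matrix counts; the TP/TN/FP/FN numbers below are made up for illustration.

```python
# Hypothetical confusion-matrix counts for a binary classifier
TP, TN, FP, FN = 40, 45, 5, 10

accuracy  = (TP + TN) / (TP + TN + FP + FN)   # correct / total
recall    = TP / (TP + FN)                    # positives caught
precision = TP / (TP + FP)                    # positive predictions correct
f1        = 2 * precision * recall / (precision + recall)

print(accuracy)   # 0.85
print(recall)     # 0.8
print(precision)  # ~0.889
print(f1)         # ~0.842
```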
This article is all about classification algorithms in machine learning. My motive in this article was to present a basic idea of classification algorithms; I will cover each algorithm in detail in upcoming articles.
If you liked this article, be sure to like it, and if you have any questions, I will do my best to answer them.