SGD Classification Example with SGDClassifier in Python

Applying Stochastic Gradient Descent (SGD) to regularized linear models provides an efficient way to build estimators for classification and regression problems.

Scikit-learn provides the SGDClassifier class to implement the SGD method for classification problems. SGDClassifier fits a regularized linear model with SGD learning: the gradient of the loss is estimated one sample at a time and the model is updated along the way. The classifier works well with large-scale datasets and is efficient and easy to implement.
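Because the model is updated incrementally, SGDClassifier also supports out-of-core learning through its partial_fit() method. The snippet below is only a minimal sketch of that idea; the generated data and the batch size are illustrative and not part of this tutorial's example.

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

# illustrative data; in practice batches would be streamed from disk
X, y = make_classification(n_samples=5000, n_features=10,
                           n_classes=3, n_clusters_per_class=1)

clf = SGDClassifier()
classes = np.unique(y)  # partial_fit needs the full label set on the first call

batch_size = 500
for start in range(0, len(X), batch_size):
    X_batch = X[start:start + batch_size]
    y_batch = y[start:start + batch_size]
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.score(X, y))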

In this tutorial, we'll briefly learn how to classify data by using the SGDClassifier class in Python. The tutorial covers:

  1. Preparing the data
  2. Training the model
  3. Predicting and accuracy check
  4. Iris dataset classification example
  5. Source code listing
We'll start by loading the required libraries and functions.

from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_iris
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale


Preparing the data

First, we'll generate a random classification dataset with the make_classification() function. The dataset contains 3 classes, 10 features, and 5000 samples.

x, y = make_classification(n_samples=5000, n_features=10, 
                           n_classes=3, 
                           n_clusters_per_class=1)

Then, we'll split the data into train and test parts, holding out 15 percent of it as test data.

xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size = 0.15)
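Before training, it can be useful to confirm the shapes and the class balance of the split. This quick check is not part of the original example and assumes NumPy is available.

import numpy as np

print(xtrain.shape, xtest.shape)   # e.g. (4250, 10) (750, 10)
print(np.bincount(ytrain))         # samples per class in the training part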


Training the model

Next, we'll define the classifier by using the SGDClassifier class and fit it on the train data.

sgdc = SGDClassifier(max_iter=1000, tol=0.01)
print(sgdc)
 
sgdc.fit(xtrain, ytrain)
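The defaults above use the hinge loss with an L2 penalty. If a different loss or regularization is needed, the main parameters can be set explicitly; the values below are only illustrative, not a recommendation (note that the "log_loss" name requires scikit-learn 1.1 or later, earlier versions call it "log").

# illustrative settings; tune them for your own data
sgdc_custom = SGDClassifier(loss="log_loss",       # logistic-regression-style loss
                            penalty="elasticnet",  # mix of L1 and L2 regularization
                            alpha=1e-4,            # regularization strength
                            l1_ratio=0.15,
                            max_iter=1000,
                            tol=0.01,
                            random_state=0)
sgdc_custom.fit(xtrain, ytrain)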
   

After training the classifier, we'll check the model's accuracy score on the training data.

score = sgdc.score(xtrain, ytrain)
print("Training score: ", score) 
 
Training score:  0.8454117647058823
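Note that the score on the training data is usually optimistic. As an extra check, not covered in the original steps, a cross-validated score could be computed on the training part:

from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(SGDClassifier(max_iter=1000, tol=0.01),
                            xtrain, ytrain, cv=5)
print("CV accuracy: ", cv_scores.mean())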


Predicting and accuracy check

Now, we can predict the test data by using the trained model. After the prediction, we'll check the accuracy by using the confusion_matrix() function.

ypred = sgdc.predict(xtest)

cm = confusion_matrix(ytest, ypred)
print(cm) 
 
[[215   6  30]
 [  8 236   4]
 [ 54  21 176]]
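The diagonal of the confusion matrix holds the correctly classified samples, so the overall test accuracy can be read directly from it. This small sketch assumes NumPy is available; accuracy_score() from sklearn.metrics would give the same number.

import numpy as np

print("Test accuracy: ", np.trace(cm) / cm.sum())  # correct predictions / all predictions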
 
 
We can also create a classification report by using the classification_report() function on the predicted data to check the other accuracy metrics.

cr = classification_report(ytest, ypred)
print(cr)

              precision    recall  f1-score   support

           0       0.78      0.86      0.81       251
           1       0.90      0.95      0.92       248
           2       0.84      0.70      0.76       251

    accuracy                           0.84       750
   macro avg       0.84      0.84      0.83       750
weighted avg       0.84      0.84      0.83       750
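If individual metrics are needed programmatically rather than as printed text, classification_report() can also return a dictionary; the keys used below follow scikit-learn's report structure.

report = classification_report(ytest, ypred, output_dict=True)
print(report["macro avg"]["f1-score"])   # e.g. 0.83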



Iris dataset classification example

We'll load the Iris dataset with the load_iris() function, extract the x and y parts, then split the data into train and test parts. Scaling the data before training helps here, since SGD is sensitive to feature scaling.

# Iris dataset example 
 
iris = load_iris()
x, y = iris.data, iris.target
x = scale(x)
xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)
 

Then, we'll use the same method mentioned above.

sgdc = SGDClassifier(max_iter=1000, tol=0.01)
print(sgdc)

sgdc.fit(xtrain, ytrain)
score = sgdc.score(xtrain, ytrain)
print("Score: ", score)

ypred = sgdc.predict(xtest)

cm = confusion_matrix(ytest, ypred)
print(cm)

cr = classification_report(ytest, ypred)
print(cr) 

SGDClassifier(tol=0.01)
Score:  0.9606299212598425
[[7 0 0]
 [0 5 2]
 [0 2 7]]
              precision    recall  f1-score   support

           0       1.00      1.00      1.00         7
           1       0.71      0.71      0.71         7
           2       0.78      0.78      0.78         9

    accuracy                           0.83        23
   macro avg       0.83      0.83      0.83        23
weighted avg       0.83      0.83      0.83        23
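In the example above, scale() is applied to the whole dataset before splitting. To keep the test data completely unseen during preprocessing, the scaler could instead be fit only on the training part, for example inside a Pipeline. This variant is a sketch and not part of the original example.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

iris = load_iris()
xtrain, xtest, ytrain, ytest = train_test_split(iris.data, iris.target, test_size=0.15)

# the scaler is fit on the training data only, then applied to the test data
pipe = make_pipeline(StandardScaler(), SGDClassifier(max_iter=1000, tol=0.01))
pipe.fit(xtrain, ytrain)
print("Test score: ", pipe.score(xtest, ytest))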
 

In this tutorial, we've briefly learned how to classify data by using Scikit-learn's SGDClassifier class in Python. The full source code is listed below.


Source code listing

from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_iris
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale

x, y = make_classification(n_samples=5000, n_features=10, 
                           n_classes=3, n_clusters_per_class=1)

xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)

sgdc = SGDClassifier(max_iter=1000, tol=0.01)
print(sgdc)

sgdc.fit(xtrain, ytrain)

score = sgdc.score(xtrain, ytrain)
print("Training score: ", score)

ypred = sgdc.predict(xtest)
cm = confusion_matrix(ytest, ypred)
print(cm)

cr = classification_report(ytest, ypred)
print(cr)


# Iris dataset example
iris = load_iris()
x, y = iris.data, iris.target
x = scale(x)

xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)

sgdc = SGDClassifier(max_iter=1000, tol=0.01)
print(sgdc)

sgdc.fit(xtrain, ytrain)
score = sgdc.score(xtrain, ytrain)
print("Score: ", score)

ypred = sgdc.predict(xtest)

cm = confusion_matrix(ytest, ypred)
print(cm)

cr = classification_report(ytest, ypred)
print(cr) 

