Understanding Activation Function with Python


   The activation function is one of the important building blocks of a neural network. Based on the input, which can be one or multiple outputs from the previous layer's neurons, the activation function decides whether to activate the neuron after summing the weighted inputs and adding the bias. This process provides the nonlinearity between the network's input and output values. In this tutorial, we'll learn about some commonly used activation functions, namely sigmoid, tanh, ReLU, and Leaky ReLU, and their implementation in Keras with Python. The tutorial covers:
  1. Sigmoid function
  2. Tanh function
  3. ReLU (Rectified Linear Unit) function
  4. Leaky ReLU function
We use the following Python libraries in this tutorial.

import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Activation, Dense, LeakyReLU 

To check the behavior of each activation function, we'll use a generated sequence of x values.

x = np.arange(-5, 5, 0.1)
print(x[1:10])
[-4.9 -4.8 -4.7 -4.6 -4.5 -4.4 -4.3 -4.2 -4.1]


Sigmoid function

The sigmoid function transforms an input value into an output between 0 and 1. It is also called the logistic function, and its curve is S-shaped. It is used in cases like making the final decision in the binary classification layer of a network.
Let's define the function in Python.
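A minimal NumPy sketch of the function looks like this (a standalone implementation for illustration, separate from the Keras Activation layer used elsewhere in the tutorial):

```python
import numpy as np

def sigmoid(x):
    # logistic function: maps any real input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

x = np.arange(-5, 5, 0.1)
y = sigmoid(x)
print(sigmoid(0))  # 0.5, the midpoint of the S-shaped curve
```

Large negative inputs approach 0 and large positive inputs approach 1, which is why the function suits binary decision outputs.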

How to Create ROC Curve in Python


   ROC stands for Receiver Operating Characteristic, and it is used to evaluate the prediction accuracy of a classifier model. The ROC curve is a graphical plot that describes the trade-off between the true positive rate (TPR, sensitivity) and the false positive rate (FPR, equal to 1 - specificity) of a prediction at all probability cutoffs (thresholds).
   In this tutorial, we'll learn how to extract ROC data from the binary predicted data and visualize it in a plot with Python. The tutorial covers:
  1. Metrics
  2. Defining the binary classifier
  3. Extracting ROC and AUC
  4. Source code listing 
We'll start by loading the required packages.

from sklearn import metrics
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression


Metrics

  The ROC curve is created from the TPR and FPR values of the classifier, so we need to understand these metrics first. The TPR and FPR formulas are given below, where TP is True Positive, FP is False Positive, TN is True Negative, and FN is False Negative. The confusion matrix helps you to understand these metrics.

                    TPR = TP / (TP + FN)

                    FPR = FP / (FP + TN)
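To make the formulas concrete, the counts can be read from a confusion matrix. A small sketch with scikit-learn, using made-up toy labels (not the breast cancer data used later in the tutorial):

```python
from sklearn import metrics

# hypothetical true labels and binary predictions
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]

# for binary labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)  # TPR = TP / (TP + FN) = 5 / 6
fpr = fp / (fp + tn)  # FPR = FP / (FP + TN) = 1 / 4
print(tpr, fpr)
```

With hard 0/1 predictions we get a single (FPR, TPR) point; the ROC curve is obtained by repeating this calculation over all probability thresholds.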

Understanding Batch Normalization with Keras in Python


   Batch Normalization is a technique to normalize the activations between the layers in a neural network to improve the training speed and accuracy (through regularization) of the model. It is intended to reduce the internal covariate shift in neural networks. Internal covariate shift means that if the first layer changes its parameters based on back-propagation feedback, the second layer also needs to adjust its parameters based on the output of the first layer, the third layer after the second, and so on. This consequent readjustment destabilizes the learning process of all subsequent layers, which makes training slow, especially for networks with a large number of layers. Batch Normalization is used to overcome this issue.
   Batch Normalization works well with image data training and it is widely used in training of Generative Adversarial Networks (GAN) models. 
   In this tutorial, we'll learn how to apply batch normalization in deep learning networks with Keras. The tutorial covers:
  1. Normalization
  2. Preparing the data
  3. Building the model
  4. Comparing the training results
  5. Source code listing
We'll start by loading the required packages.

from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D 
from keras.layers import Dense, Flatten, Dropout
from keras.layers import BatchNormalization
from keras.datasets import mnist
from keras.optimizers import RMSprop
import matplotlib.pyplot as plt


Normalization

Normalization is a method to rescale input data to a common scale. Note that scikit-learn's normalize function, used below, scales each sample to unit (L2) norm rather than to zero mean and unit standard deviation, so the resulting values fall between -1 and 1. The example below shows how to normalize the data and what its values look like after normalization.

import sklearn.preprocessing as prep

data = [[10, 321, -22, 3210, 23, -321]]
norm = prep.normalize(data)
print(norm) 
[[ 0.00308441  0.09900951 -0.0067857   0.99009512  0.00709414 -0.09900951]] 

Here, the data values are scaled into a range between -1 and 1. This kind of rescaling improves model training speed, and a similar idea is used in Batch Normalization: in a neural network, the normalization is applied per mini-batch at every layer, hence the name Batch Normalization.
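For intuition, the core Batch Normalization transform can be sketched in NumPy: each feature in a mini-batch is standardized with the batch mean and variance, then scaled and shifted by the learned parameters gamma and beta (fixed to 1 and 0 in this sketch; in Keras, the BatchNormalization layer learns them during training):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # x: mini-batch of shape (batch_size, n_features)
    mean = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                       # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)   # standardize each feature
    return gamma * x_hat + beta               # learned scale and shift

batch = np.array([[10., 321.], [-22., 3210.], [23., -321.]])
out = batch_norm(batch)
print(out.mean(axis=0))  # approximately 0 for each feature
print(out.std(axis=0))   # approximately 1 for each feature
```

After the transform, each feature in the batch has roughly zero mean and unit standard deviation, which is what stabilizes the inputs seen by the next layer.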


Understanding Moving Average with R


   The Moving Average (MA) technique calculates the mean of a subset (window) of values while sliding that window across the entire data series. The moving average is a simple and widely used technique for analyzing time-series data in statistics. It smooths out a series of data by taking mean values: most of the ups and downs and sharp fluctuations in the series are eliminated, revealing the long-term trend.
   In this tutorial, we'll learn how to calculate the MA with window sizes of three and five for given data in R. The tutorial covers:
  1. Creating sample data
  2. Implementing the MA function
  3. Visualizing in a plot
  4. Source code listing
Let's get started.
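The computation the tutorial builds in R amounts to a sliding-window mean. As a quick language-agnostic sketch of the idea (shown here in Python with NumPy's convolve and a uniform kernel, not the R code developed below):

```python
import numpy as np

def moving_average(x, k):
    # mean over a sliding window of size k;
    # 'valid' mode yields len(x) - k + 1 points (no padding at the edges)
    return np.convolve(x, np.ones(k) / k, mode="valid")

data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(moving_average(data, 3))  # [2. 3. 4. 5. 6. 7. 8.]
print(moving_average(data, 5))  # [3. 4. 5. 6. 7.]
```

A larger window produces a smoother but shorter result, which is the trade-off explored with the window sizes of three and five.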