Sentiment Analysis with BERT in PyTorch

      Sentiment analysis involves determining the sentiment (positive, negative, or neutral) expressed in a piece of text. It is a valuable tool for understanding user opinions, customer feedback, and social media posts. In this tutorial, we'll explore how to perform sentiment analysis using BERT (Bidirectional Encoder Representations from Transformers), one of the most widely used models in NLP.

  1. Loading the Pre-trained BERT Model
  2. Performing Sentiment Analysis
  3. Conclusion 
  4. Source code listing

     Let's get started.

 

Loading the Pre-trained BERT Model

    First, ensure you have the necessary libraries installed. We'll be using the transformers library from Hugging Face together with PyTorch. You can install both with the following command:


pip install transformers torch

    We'll begin by importing the necessary modules and classes from the transformers and torch libraries. Specifically, we'll import BertTokenizer and BertForSequenceClassification from transformers, as well as softmax from torch.nn.functional and the torch module itself.

 
from transformers import BertTokenizer, BertForSequenceClassification
from torch.nn.functional import softmax
import torch
 

    Next, we load the pre-trained model and tokenizer. We define the name of the pre-trained BERT model we want to use ("bert-base-uncased"), then instantiate a BertTokenizer object and a BertForSequenceClassification model object with the from_pretrained() method, which loads the pre-trained weights and configuration for the specified model name. Note that "bert-base-uncased" provides only the pre-trained encoder: the sequence classification head on top is newly initialized (Transformers prints a warning saying so), so this setup demonstrates the mechanics of inference rather than a production-quality sentiment model.

 
# Load pre-trained BERT model and tokenizer
model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
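
    Because the classification head here is untrained, a common alternative is to load a checkpoint that has already been fine-tuned for sentiment. Below is a minimal sketch of that swap; it assumes the checkpoint named "textattack/bert-base-uncased-SST-2" is available on the Hugging Face Hub, and the rest of the tutorial works unchanged with it.

# Optional: swap in a sentiment fine-tuned checkpoint instead
# (assumes "textattack/bert-base-uncased-SST-2" exists on the Hub)
finetuned_name = "textattack/bert-base-uncased-SST-2"
tokenizer = BertTokenizer.from_pretrained(finetuned_name)
model = BertForSequenceClassification.from_pretrained(finetuned_name)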

 

Performing Sentiment Analysis

    We define a sample text for sentiment analysis; you can replace it with any text you want to analyze. We then tokenize it using the encode_plus() method of the tokenizer object, which tokenizes and encodes the input text into tensors suitable for the model. The max_length, truncation, padding, and return_tensors parameters control the tokenization: sequences longer than max_length are truncated, and return_tensors="pt" returns PyTorch tensors.


# Example text for sentiment analysis
text = "I really enjoyed watching that movie. It was fantastic!"

# Tokenize and encode the text
tokens = tokenizer.encode_plus(text, max_length=128, truncation=True,
                               padding=True, return_tensors="pt")
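
    If you're curious what encode_plus() returned, it is a dictionary-like object holding tensors. A quick, purely illustrative way to inspect it:

# Inspect the encoded output (illustrative)
print(tokens.keys())              # input_ids, token_type_ids, attention_mask
print(tokens["input_ids"].shape)  # torch.Size([1, sequence_length])
print(tokenizer.convert_ids_to_tokens(tokens["input_ids"][0].tolist()))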

    Next, we use the model object to make a prediction on the tokenized input. The **tokens syntax unpacks the encoded inputs into keyword arguments for the model. We wrap the call in torch.no_grad() to disable gradient tracking, since we're doing inference, not training.


# Get model prediction
with torch.no_grad():
    logits = model(**tokens).logits
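
    The logits tensor holds one row per input text and one raw, unnormalized score per class; you can verify its shape before applying softmax:

# logits has shape [1, 2]: one row for the single input, one column per class
print(logits.shape)   # torch.Size([1, 2])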

    We apply softmax to obtain probabilities. The softmax() function converts the raw logits into probabilities that sum to 1. To get the predicted class, we use torch.argmax() to find the index of the maximum probability, which corresponds to the predicted class, and .item() to extract that index as a Python integer.

 
# Apply softmax to obtain probabilities
probs = softmax(logits, dim=1)

# Get the predicted class
predicted_class = torch.argmax(probs).item()
 

    We define a list of class labels (negative and positive) and use the predicted class index to look up the corresponding label. Note that this ordering is a convention we choose here; a fine-tuned checkpoint defines its own mapping in model.config.id2label. Finally, we print the original text, the predicted sentiment label, and a dictionary mapping each class label to its probability for readability.

 
# Map the predicted class index to a label (e.g., positive, negative)
class_labels = ['negative', 'positive']
predicted_label = class_labels[predicted_class]

# Print the results
print("Original Text:", text)
print("Predicted Label:", predicted_label)
print("Class Probabilities:", dict(zip(class_labels, probs.numpy()[0])))

The result looks as follows. (The exact probabilities will vary between runs, because the classification head is freshly initialized.)


Original Text: I really enjoyed watching that movie. It was fantastic!
Predicted Label: positive
Class Probabilities: {'negative': 0.4436438, 'positive': 0.55635625}
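
    To score several texts at once, you can call the tokenizer directly on a list of strings and let it pad the batch. Here is a minimal sketch reusing the tokenizer, model, and class_labels defined above:

# Batch sentiment scoring (illustrative)
texts = ["I really enjoyed watching that movie. It was fantastic!",
         "The plot was dull and the acting was worse."]

# Calling the tokenizer directly handles batching and padding
batch = tokenizer(texts, max_length=128, truncation=True,
                  padding=True, return_tensors="pt")

with torch.no_grad():
    batch_logits = model(**batch).logits

batch_probs = softmax(batch_logits, dim=1)
for sample_text, sample_probs in zip(texts, batch_probs):
    label = class_labels[torch.argmax(sample_probs).item()]
    print(sample_text, "->", label)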

 

 

Conclusion
 
    In this tutorial, we performed sentiment analysis using BERT. By leveraging pre-trained models like BERT, fine-tuned for the task at hand, you can quickly analyze the sentiment of text data, opening up a wide range of applications in understanding user opinions, customer feedback, and social media. The full source code is listed below.
 
 
Source code listing

 
from transformers import BertTokenizer, BertForSequenceClassification
from torch.nn.functional import softmax
import torch

# Load pre-trained BERT model and tokenizer
model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)

# Example text for sentiment analysis
text = "I really enjoyed watching that movie. It was fantastic!"

# Tokenize and encode the text
tokens = tokenizer.encode_plus(text, max_length=128, truncation=True,
                               padding=True, return_tensors="pt")

# Get model prediction
with torch.no_grad():
    logits = model(**tokens).logits

# Apply softmax to obtain probabilities
probs = softmax(logits, dim=1)

# Get the predicted class
predicted_class = torch.argmax(probs).item()

# Map the predicted class index to a label (e.g., positive, negative)
class_labels = ['negative', 'positive']
predicted_label = class_labels[predicted_class]

# Print the results
print("Original Text:", text)
print("Predicted Label:", predicted_label)
print("Class Probabilities:", dict(zip(class_labels, probs.numpy()[0])))

 
 
References:
  1. BERT model documentation, Hugging Face Transformers: https://huggingface.co/docs/transformers/model_doc/bert
