Sentiment analysis with Bayesian features

Movies are great! Sometimes… But what if we want to find out if one is worth watching? A good start would be to read the reviews of other film enthusiasts on the biggest reviewing platform, IMDB. However, this takes some time… so what about making our computer read the reviews and assess whether they are rather positive or negative?

Thanks to the size of this database, this toy problem has been studied a lot, with different algorithms. Aditya Timmaraju and Vikesh Khanna from Stanford University give a really nice overview of the various methods that can be used to tackle this problem, achieving a maximum accuracy of 86.5% with support vector machines. James Hong and Michael Fang used paragraph vectors and recurrent neural networks to correctly classify 94.5% of the reviews. Today, we explore a much simpler, yet very effective, algorithm proposed by Sida Wang and Christopher D. Manning: the Naive Bayes Support Vector Machine (NBSVM). We will propose a geometric interpretation of this method, along with a Python implementation that reaches 91.6% accuracy on the IMDB dataset in only a few lines of code.

On sentiment analysis

Sentiment analysis, or more generally text classification, is a very hot topic nowadays. Flagging spam, fake news or toxic content on social media is a thankless task for which very sophisticated algorithms have been designed. As often, deep learning seems to achieve state-of-the-art performance when the dataset is large enough. However, there are still simple and elegant solutions that do not rely on huge corpora and computational power. In this article, we present one of them: a very efficient approach to text classification that nonetheless yields great performance.

Multinomial Naive Bayes classifier

Bayesian classifiers are a very popular and efficient way to tackle text classification problems. With this method, we represent a text by a vector of occurrences $f$, in which each element $f_j$ denotes the number of times the word $V_j$ appears in this text. The order of the words in the sentence doesn't matter, only the number of times each word appears. The Bayes formula gives us the probability that a certain text is a positive review (label $y = 1$):

$$P(y = 1 \mid f) = \frac{P(f \mid y = 1)\, P(y = 1)}{P(f)}$$

We want to find the probability that a given text is a positive review ($P(y = 1 \mid f)$). Thanks to this formula, we only need to know the probability that this review, knowing that it is positive, was written ($P(f \mid y = 1)$), and the overall probability that a review is positive, $P(y = 1)$. Although $P(f)$ appears in the formula, it does not really matter for our classification, as we will see.

Probability of a review? The phrase "the probability of a review being written" can be confusing when taken out of context. When speaking of randomness, we generally picture a coin flip or a dice roll, and it is fairly intuitive to assign a probability to the outcomes of these actions. But can we do the same thing for a tweet that I'm about to write? The short answer is yes, thanks to statistical language models. These models are just a systematic way of assigning a probability to a sequence of words. Multinomial naive models are probably the simplest ones, but a lot of other more sophisticated methods exist too.
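
To make this concrete, here is a tiny sketch of the simplest possible language model, a unigram model that scores a sentence by the product of the frequencies of its words (the corpus and the sentence are of course made up):

from collections import Counter

# Word frequencies estimated from a tiny made-up "corpus"
corpus = "a great movie with a great cast but a weak plot".split()
freq = Counter(corpus)
total = sum(freq.values())

def unigram_probability(sentence):
    # Probability of the sentence = product of the individual word frequencies
    p = 1.0
    for word in sentence.split():
        p *= freq[word] / total
    return p

print(unigram_probability("a great plot"))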

$P(y = 1)$ can be easily estimated: it is the frequency of positive reviews in our corpus (noted $N_+ / N$). However, $P(f \mid y = 1)$ is more difficult to estimate, and we need to make a very strong assumption about it. In fact, we will consider that the appearance of each word of the text is independent of the appearance of the other words. This assumption is very naive, hence the name of the method.
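
To fix ideas, here is a toy sketch (made-up counts, not the IMDB data) of how the prior and the word probabilities can be estimated from smoothed counts:

import numpy as np

# Toy corpus with a 3-word vocabulary: each row is a count vector, labels are 1/0
f_toy = np.array([[2, 0, 1],    # positive review
                  [1, 1, 0],    # positive review
                  [0, 2, 1]])   # negative review
y_toy = np.array([1, 1, 0])

# P(y=1): the frequency of positive reviews in the corpus
prior_pos = y_toy.mean()

# Smoothed counts of each word in positive and negative reviews (alpha = 1)
alpha = 1
p = alpha + f_toy[y_toy == 1].sum(axis=0)
q = alpha + f_toy[y_toy == 0].sum(axis=0)

# Normalise to get the estimated word probabilities p_j and q_j
p = p / p.sum()
q = q / q.sum()
print(prior_pos, p, q)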

We now consider that $P(f \mid y = 1)$ follows a multinomial distribution: for a review of $n = \sum_j f_j$ words, what is the probability that these words are distributed as in $f$? If we denote $p_j$ the probability that the word $V_j$ appears in a positive review (and $q_j$ the probability that it appears in a negative review), the multinomial distribution assumes that $f$ is distributed as follows:

$$P(f \mid y = 1) = \frac{n!}{\prod_j f_j!} \prod_j p_j^{\,f_j}$$

Thus, we can predict that the review is positive if $P(y = 1 \mid f) > P(y = 0 \mid f)$, that is, if the likelihood ratio is greater than one:

$$\frac{P(y = 1 \mid f)}{P(y = 0 \mid f)} = \frac{P(f \mid y = 1)\, P(y = 1)}{P(f \mid y = 0)\, P(y = 0)} > 1$$

Or, equivalently, if its logarithm is greater than zero (the multinomial coefficients cancel out in the ratio):

$$\log \frac{P(y = 1)}{P(y = 0)} + \sum_j f_j \log \frac{p_j}{q_j} > 0$$

Which can be written as:

$$b + \langle f, r \rangle > 0, \quad \text{with } r_j = \log \frac{p_j}{q_j} \text{ and } b = \log \frac{P(y = 1)}{P(y = 0)}$$

We see that our decision boundary is linear in the feature space: the multinomial Naive Bayes classifier is nothing more than a linear classifier whose weights are the log-ratios $r_j$.
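
In code, the whole classifier then boils down to a dot product and a sign. Continuing the toy example above (still just a sketch, not the final implementation):

# The Naive Bayes prediction is just the sign of b + <f, r>
r = np.log(p / q)                          # log-ratio weights
b = np.log(prior_pos / (1 - prior_pos))    # log-prior ratio

def predict_mnb(f):
    # Predict "positive" when the linear score is above zero
    return int(b + f.dot(r) > 0)

print(predict_mnb(np.array([3, 0, 1])))    # a review full of "positive" words -> 1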

Alternative representation

However, I like to see this formula written differently:

$$b + \langle f, r \rangle > 0$$

is equivalent to

$$b + \langle r \circ f, \mathbb{1} \rangle > 0$$

where $\circ$ stands for the element-wise product and $\mathbb{1}$ for the all-ones vector $(1, \dots, 1)$. Now our Bayesian feature vector is $\tilde{f} = r \circ f$ and our separating hyperplane is orthogonal to $\mathbb{1}$. However, we can wonder if this particular hyperplane is the most efficient one for classifying the reviews… and the answer is no! Here is our free lunch: we will use support vector machines to find a better separating hyperplane for these Bayesian features.
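
The rewriting is easy to check numerically, reusing the weights r from the toy sketch above:

# The score b + <f, r> is also b + <r * f, 1>: plain Naive Bayes projects the
# Bayesian features r * f onto the all-ones vector, whereas the SVM will learn
# a better normal vector for the separating hyperplane
f = np.array([3, 0, 1])
f_bar = r * f                                  # element-wise product
assert np.isclose(f.dot(r), f_bar.dot(np.ones_like(f_bar)))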

SVM or logistic regression?

The geometry of support vector machines

A support vector machine tries to find a separating hyperplane that maximises the distance between the plane and the closest points. This distance, called the margin, can be expressed in terms of $w$ (with the convention that $|w^\top x + b| = 1$ for the closest points):

$$m = \frac{1}{\|w\|}$$

[Figure: the margin of a support vector machine]

A point is correctly classified if it is on the right side of the plane and outside of the margin. On this image, we see that a sample $x_i$ is correctly classified if $w^\top x_i + b \geq 1$ and $y_i = 1$, or $w^\top x_i + b \leq -1$ and $y_i = -1$. This can be summarised as $y_i (w^\top x_i + b) \geq 1$. We want to maximise the margin, thus the optimisation problem of a support vector classifier is:

$$\min_{w, b} \ \frac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i (w^\top x_i + b) \geq 1 \ \ \forall i$$

However, if our observations are not linearly separable, such a solution doesn't exist. Therefore, we introduce slack variables $\xi_i$ that allow our model to incorrectly classify some points, at some cost $C$:

$$\min_{w, b, \xi} \ \frac{1}{2} \|w\|^2 + C \sum_i \xi_i \quad \text{subject to} \quad y_i (w^\top x_i + b) \geq 1 - \xi_i, \ \ \xi_i \geq 0 \ \ \forall i$$
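
To make the formulation concrete, here is a small sketch that evaluates the slack variables and the soft-margin objective for an arbitrary hyperplane on made-up data:

import numpy as np

# Toy 2-dimensional data, labels in {-1, +1}
rng = np.random.RandomState(0)
X = rng.randn(20, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

# An arbitrary candidate hyperplane and cost
w = np.array([1.0, 1.0])
b = 0.0
C = 1.0

# Slack variables: how far each point is from being correctly
# classified outside the margin
xi = np.maximum(0.0, 1.0 - y * (X.dot(w) + b))

# Soft-margin objective: 1/2 ||w||^2 + C * sum of the slacks
objective = 0.5 * w.dot(w) + C * xi.sum()
print(objective)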

Logistic regression

In logistic regression, the probability of a label $y_i$ to be $1$ given a vector $x_i$ is:

$$P(y_i = 1 \mid x_i) = \frac{1}{1 + e^{-(w^\top x_i + b)}}$$

If we add an l2-regularisation penalty to our regression, the objective function becomes:

$$\min_{w, b} \ \mathcal{L}(w, b) + \frac{\lambda}{2} \|w\|^2$$

Where $\mathcal{L}(w, b) = \sum_i \log\left(1 + e^{-y_i (w^\top x_i + b)}\right)$ is the negative log-likelihood of our observations (with labels $y_i \in \{-1, 1\}$). If you like statistics, it is worth noting that adding the l2-penalty is the same as doing maximum a posteriori estimation with a Gaussian prior on the weights (or a Laplacian prior for an l1-penalty).
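
The penalised objective takes just a couple of lines, on the same toy data and hyperplane as in the SVM sketch above ($\lambda = 1$ is an arbitrary choice):

# Negative log-likelihood of the logistic model (labels in {-1, +1})
# plus the l2 penalty, for the same toy data and hyperplane as above
lam = 1.0
nll = np.sum(np.log1p(np.exp(-y * (X.dot(w) + b))))
penalised_objective = nll + 0.5 * lam * w.dot(w)
print(penalised_objective)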

Why are they similar?

We define the log-likelihood ratio as:

$$t = \log \frac{P(y = 1 \mid x)}{P(y = -1 \mid x)} = w^\top x + b$$

The cost of a positive example for the support vector machine is:

$$\ell_{\text{SVM}}(t) = \max(0,\ 1 - t)$$

and for the logistic regression with an l2-regularisation penalty:

$$\ell_{\text{logit}}(t) = \log\left(1 + e^{-t}\right)$$

If we plot the cost of a positive example for the two models, we see that we have very similar losses:

[Figure: support vector machine loss against logistic loss]
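
The figure above can be reproduced in a few lines of matplotlib, if you want to see it for yourself:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-3, 3, 200)
hinge = np.maximum(0.0, 1.0 - t)     # cost of a positive example for the SVM
logit = np.log1p(np.exp(-t))         # cost of a positive example for the logistic regression

plt.plot(t, hinge, label="hinge loss (SVM)")
plt.plot(t, logit, label="logistic loss")
plt.xlabel("t")
plt.ylabel("cost of a positive example")
plt.legend()
plt.show()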

This is why an SVC with a linear kernel will give results similar to an l2-penalised logistic regression.

In our classification problem, we have 25000 training examples and more than 130000 features, so an SVC would take a very long time to train. However, a logistic regression with an l2 penalty is much faster to train than an SVC when the number of samples grows, and gives very similar results, as we just saw.

Dual formulations

When the number of samples is smaller than the number of features, as is the case here, one might consider solving the dual formulation of the logistic regression. If you are interested in this formulation, I recommend the paper by Hsiang-Fu Yu, Fang-Lan Huang, and Chih-Jen Lin, which makes a nice comparison between the linear SVC and the dual formulation of the logistic regression, uncovering more similarities between these techniques.

Implementation

From reviews to vectors

The original dataset can be found here. The script IMDB.py loads the reviews as a list of strings for both the train and the test sets:

from IMDB import load_reviews

# Load the training and testing sets
train_set, y_train = load_reviews("train")
test_set, y_test = load_reviews("test")

Feel free to use it: it downloads and unzips the database automatically if needed. We will use scikit-learn's TfidfVectorizer to transform our texts into vectors. Instead of only counting the words, it returns their frequencies and applies some very useful transformations, such as giving more weight to uncommon words. The vectorizer I used is a slightly modified version of TfidfVectorizer, with a custom pre-processor and tokenizer (which keeps exclamation marks, useful for sentiment analysis). By default, it doesn't only count words but also bi-grams (pairs of consecutive words), as this gives the best results at the cost of a larger feature space. You can find the code here, and use it to run your own tests:

from text_processing import string_to_vec
# Returns a vector that counts the occurrences of each n-gram
my_vectorizer = string_to_vec(train_set, method="Count")
# Returns a vector of the frequency of each n-gram
my_vectorizer = string_to_vec(train_set, method="TF")
# Same but applies an inverse document frequency transformation
my_vectorizer = string_to_vec(train_set, method="TFIDF")

You can tune every parameter of it, just as with a standard TfidfVectorizer. For instance, if you want to keep only individual words and not bi-grams:

# Returns a vector that counts the occurrences of each word
my_vectorizer = string_to_vec(train_set, method="Count", ngram_range=(1, 1))

From now on, we will only use:

myvectorizer = string_to_vec(train_set, method="TFIDF")

This keeps all words and bi-grams that appear more than 5 times in our corpus. That is a lot of terms: our feature space has 133572 dimensions, for 25000 training points! Now that we know how to transform our reviews into vectors, we need to choose a machine learning algorithm. We talked about support vector machines; however, they scale very poorly and are too slow to be trained on 25000 points with more than 100000 features. We will thus use the dual formulation of an l2-penalised logistic regression instead, which, as we saw above, behaves very much like a linear support vector classifier.
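
As an aside, if you would rather stick to the stock scikit-learn classes, a rough stand-in for the custom vectorizer above can be built directly; note that the exact pre-processing and tokenisation of string_to_vec are not reproduced here, and the min_df=5 cut-off is only an approximation of the "more than 5 times" rule:

from sklearn.feature_extraction.text import TfidfVectorizer

# Rough stand-in for string_to_vec(train_set, method="TFIDF"):
# words and bi-grams, discarding the rarest ones
approx_vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
approx_vectorizer.fit(train_set)
print(len(approx_vectorizer.vocabulary_))   # size of the feature space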

The model

As seen before, we need estimates of the word probabilities. We build them from the smoothed occurrence counts of the positive and negative reviews:

$$p = \alpha + \sum_{i:\ y_i = 1} f^{(i)}, \qquad q = \alpha + \sum_{i:\ y_i = 0} f^{(i)}$$

for some smoothing parameter $\alpha$. The log-ratio vector is defined as:

$$r = \log\left(\frac{p \,/\, \|p\|_1}{q \,/\, \|q\|_1}\right)$$

Where $\|\cdot\|_1$ stands for the $L_1$ norm.

At last, the Bayesian features used to fit our classifier will be:

$$\tilde{f}^{(i)} = r \circ f^{(i)}$$
Of course, we will use a sparse matrix to save memory (our vectors are mostly zeros). Wrapped in some Python code, this gives:

from __future__ import division
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression
import numpy as np

class NBSVM:

    def __init__(self, alpha=1, **kwargs):
        self.alpha = alpha
        # Keep additional keyword arguments to pass to the classifier
        self.kwargs = kwargs

    def fit(self, X, y):
        f_1 = csr_matrix(y).transpose()
        f_0 = csr_matrix(np.subtract(1,y)).transpose() #Invert labels
        # Compute the probability vectors P and Q
        p_ = np.add(self.alpha, X.multiply(f_1).sum(axis=0))
        q_ = np.add(self.alpha, X.multiply(f_0).sum(axis=0))
        # Normalize the vectors
        p_normed = np.divide(p_, float(np.sum(p_)))
        q_normed = np.divide(q_, float(np.sum(q_)))
        # Compute the log-ratio vector R and keep for future uses
        self.r_ = np.log(np.divide(p_normed, q_normed))
        # Compute bayesian features for the train set
        f_bar = X.multiply(self.r_)
        # Fit the regressor
        # liblinear is required for the dual formulation (with the default l2 penalty)
        self.lr_ = LogisticRegression(dual=True, solver="liblinear", **self.kwargs)
        self.lr_.fit(f_bar, y)

    def predict(self, X):
        return self.lr_.predict(X.multiply(self.r_))

And finally (I chose the parameters $\alpha = 0.1$ and $C = 12$ by cross-validation):

from sklearn.metrics import accuracy_score

# Transform the training and testing sets
X_train = myvectorizer.transform(train_set)
X_test = myvectorizer.transform(test_set)

clf = NBSVM(alpha=0.1,C=12)
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print("Accuracy: {}".format(accuracy_score(y_test, predictions)))
Accuracy: 0.91648

That was a pretty painless way of achieving 91.6% accuracy!
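
For reference, the cross-validation used to pick $\alpha$ and $C$ can be as simple as a grid search on a held-out part of the training set (a minimal sketch; the grids of values below are arbitrary):

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hold out part of the training set to compare a few (alpha, C) candidates
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train,
                                            test_size=0.2, random_state=0)
best_params, best_score = None, 0.0
for alpha in [0.01, 0.1, 1.0]:
    for C in [1, 4, 12, 30]:
        model = NBSVM(alpha=alpha, C=C)
        model.fit(X_tr, y_tr)
        score = accuracy_score(y_val, model.predict(X_val))
        if score > best_score:
            best_params, best_score = (alpha, C), score
print(best_params, best_score)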

One last word: Natural language processing is a hot topic, and transfer learning is becoming a viable solution. I’m not expecting simple models such as this one to perform better than deep models forever, even on small datasets. So keep this algorithm in mind as a strong baseline, but don’t forget to keep up with the literature and try other things!

Thank you very much for reading, and don't hesitate to leave a comment if you have any questions or suggestions ;)
