What is GloVe?

GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space.

GloVe is an approach that marries the global statistics of matrix factorization techniques like LSA with the local, context-based learning of word2vec.

Rather than using a window to define local context, GloVe constructs an explicit word-context (word co-occurrence) matrix using statistics across the whole text corpus. The result is a learning model that may yield generally better word embeddings.

  • What are local and global statistics?
    • Local statistics are statistics computed within a window of words around each word, which I have already explained in my previous blog on Word2Vec.
    • Global statistics are statistics computed across the whole text corpus. The short sketch below illustrates the difference.
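
For a concrete picture, here is a tiny sketch (a toy example of my own, not taken from either paper) that prints the local context windows word2vec learns from, one at a time; global statistics are what you get when such window counts are aggregated over the whole corpus, as in the co-occurrence matrix sketch further below.

# Local statistics: word2vec only ever sees one small context window at a time.
sentence = "I love to make videos".split()
window = 1                                   # one word on each side of the center word

for i, center in enumerate(sentence):
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
    print(center, "->", context)
# Global statistics would aggregate counts from all such windows, over every
# sentence in the corpus, into a single co-occurrence matrix.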

Why is GloVe better than Word2Vec?

Since word2vec is such a well-performing model, why not just use it? The reason does not lie in performance but in the fundamentals of the problem formulation. Remember that word2vec relies only on the local statistics of language: the semantics learnt for a given word are affected only by its surrounding words.

How does GloVe work?

Before beginning, we need to know what a co-occurrence matrix is and why the GloVe model can derive semantic relationships between words from it.

  • A co-occurrence matrix is a matrix that records how often words occur together, i.e. how many times each word in the text appears in the context of every other word.

    • For example, take the two sentences "I love NLP" and "I love to make videos"; their co-occurrence matrix is shown below:

      [Co-occurrence matrix of the two example sentences: rows are words, columns are context words]

    • Now let's take a pair of words with respect to "I", namely "I love", which appears in both sentences. So in the matrix above, the count of "love" in the context of "I" is two, and in the same way all the other counts are stored in the matrix. A short sketch after this list builds the same matrix in code.

  • Then how does the GloVe model utilize it?

    • First, let's look at the basic quantity used in the GloVe paper:

      X_ij — the number of times word j occurs in the context of word i

    • Here j is a context word; for example X_I,love = 2, which means the word "love" occurs 2 times in the context of "I".

  • Now, to show the power of the co-occurrence matrix, let's look at the probabilistic quantity built from it:

    P_ik = X_ik / X_i, where X_i = Σ_j X_ij

    • The numerator is the same count we just explained, and the denominator X_i is obtained by summing all the column values of a particular row, i.e. the total co-occurrence count of word i.
  • Now, from the GloVe paper:

    [Table from the GloVe paper: co-occurrence probabilities P(k | ice) and P(k | steam), and their ratio, for probe words k = solid, gas, water, and an unrelated word]

    • The table shows the probability of each probe word k with respect to "ice" and with respect to "steam", where k is set to solid, gas, water, and a random unrelated word.
    • Now we divide one probability by the other and keep the result in tabular form, which shows a remarkable pattern: the ratio of P(solid | ice) to P(solid | steam) is high, because "solid" is closely related to "ice" but not to "steam", while for a word related to both (water) or to neither, the ratio stays close to 1.
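
As promised above, here is a small sketch of my own (whitespace tokenization and a symmetric window of one word are my assumptions, not choices taken from the paper) that builds the co-occurrence matrix X for the two example sentences and then forms the row-normalized probabilities P_ij = X_ij / X_i. On a real corpus, the ice/steam ratios from the paper's table are computed in exactly the same way.

import numpy as np
from collections import defaultdict

corpus = ["I love NLP", "I love to make videos"]
window = 1                                   # symmetric context window: one word on each side

counts = defaultdict(float)                  # (word, context word) -> co-occurrence count
vocab = []
for sentence in corpus:
    tokens = sentence.split()
    for token in tokens:
        if token not in vocab:
            vocab.append(token)
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[(center, tokens[j])] += 1

index = {word: k for k, word in enumerate(vocab)}
X = np.zeros((len(vocab), len(vocab)))       # the co-occurrence matrix
for (w1, w2), c in counts.items():
    X[index[w1], index[w2]] = c

print(X[index["I"], index["love"]])          # 2.0 -- "love" occurs twice in the context of "I"

P = X / X.sum(axis=1, keepdims=True)         # P_ij = X_ij / X_i (row-wise normalization)
print(P[index["I"], index["love"]])          # probability of "love" given "I"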

Mathematics behind GloVe

I am using the following notation, which is slightly different from the paper's, due to difficulties rendering LaTeX in a Jupyter notebook.

  • w, u — Two separate embedding layers.
  • w* — Transpose of w
  • X — co-occurrence matrix
  • bw and bu — Biases of w and u respectively

There are three issues to solve to arrive at word embeddings using GloVe.

  • We don't yet have an equation, e.g. F(i, j, k) = P_ik/P_jk, but just an expression built from the ratio.
  • Word vectors are high-dimensional vectors, whereas P_ik/P_jk is a scalar, so there is a dimensional mismatch.
  • There are three entities involved (i, j, and k), but computing a loss function with three entities can get hairy; this needs to be reduced to two.
  • Solving the first issue is easy: just write the expression down. Assume there is a function F which takes in the word vectors of i, j and k and outputs the ratio we are interested in.

    F(w_i,w_j, u_k) = P_ik/P_jk

    • Here we are using two separate embedding layers, w and u. Why? The paper says these two layers often perform equivalently and differ only because of their different random initializations; however, having two layers helps the model reduce overfitting.
  • Word vector spaces have linear substructure. For example, you can perform arithmetic in embedding space, e.g.

    w_king - w_male + w_female ≈ w_queen, so we can now rewrite the above expression in terms of a vector difference:

    • F(w_i - w_j, u_k) = P_ik/P_jk
  • Now, solving the second issue

    • F(w_i - w_j, u_k) = P_ik/P_jk

      • In the expression above, the left-hand side is a vector while the right-hand side is a scalar, so we have to turn the left-hand side into a scalar. This is done by introducing a transpose and a dot product between the two entities, as follows.

        F((w_i - w_j)* . u_k) = P_ik/P_jk

        • If you treat a word vector as a Dx1 matrix, (w_i - w_j)* will be 1xD shaped, which gives a scalar when multiplied with u_k.
    • Now, how do we choose the function F? If we assume F has a certain property (i.e. it is a homomorphism between the additive and multiplicative groups), we get,

      F(w_i* u_k - w_j* u_k) = F(w_i* u_k)/F(w_j* u_k) = P_ik/P_jk

      • In other words, this particular homomorphism ensures that the subtraction F(A - B) can also be represented as the division F(A)/F(B) and give the same result. Therefore,

        F(w_i* u_k)/F(w_j* u_k) = P_ik/P_jk and

        F(w_i* u_k) = P_ik

      • If we assume F = exp, the above homomorphism property is satisfied. Then let us set,

        exp(w_i* u_k) = P_ik = X_ik/X_i

        and

        w_i* u_k = log(X_ik) - log(X_i)

        Next, since X_i is independent of k, we move log(X_i) to the LHS,

        w_i* u_k + log(X_i) = log(X_ik)

        Now, given that we do not have a bias yet in the equation, we get a bit creative and express log(X_i) as a bias term in neural-network parlance (adding a symmetric bias for u_k as well), which gives,

        w_i* u_k + bw_i + bu_k - log(X_ik) = 0

        where bw and bu are biases of the network. A quick numeric check of these steps follows right after this list.
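
As a quick sanity check of two of the steps above (toy numbers of my own, not from the paper): the dot product with the transposed difference really does produce a scalar, resolving the dimensional mismatch, and F = exp really does turn subtraction into division.

import numpy as np

rng = np.random.default_rng(0)
D = 4                                        # a small embedding dimension for the check
w_i, w_j, u_k = rng.normal(size=D), rng.normal(size=D), rng.normal(size=D)

scalar = (w_i - w_j) @ u_k                   # (1xD) times (Dx1): a single number, no mismatch
print(scalar)

a, b = w_i @ u_k, w_j @ u_k
print(np.isclose(np.exp(a - b), np.exp(a) / np.exp(b)))   # True: exp(A - B) == exp(A) / exp(B)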

Now comes the cost function

In an ideal setting, where you have perfect word vectors, the expression above will be zero. In other words, that is our goal or objective, so we set the LHS expression as our cost function.

J(w_i, u_j) = (w_i* u_j + bw_i + bu_j - log(X_ij))²

The square makes this a least-squares cost; summing it over all word pairs (i, j) gives the full objective. In the expression above, k has been replaced with j.

The expression above is still not the final cost function: if X_ij is zero, log(0) is undefined, and that has to be handled. One fix is to replace log(X_ij) with log(1 + X_ij), a form of additive (Laplace-style) smoothing. But the GloVe paper proposes a sleeker way of doing this: introducing a weighting function,

J = Σ_ij f(X_ij)(w_i* u_j + bw_i + bu_j - log(X_ij))²

where f(x) = (x/x_max)^a if x < x_max, else 1. Since f(0) = 0, pairs with X_ij = 0 simply drop out of the sum, which takes care of the log(0) problem.
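
To tie all of the above together, here is a minimal training sketch of my own (a toy illustration, not the official GloVe implementation; the paper trains with AdaGrad rather than the plain gradient descent used here, and the values of x_max, alpha, lr, the toy matrix X and the number of epochs are all assumptions). It minimizes the weighted least-squares cost J on a small hand-made co-occurrence matrix.

import numpy as np

rng = np.random.default_rng(0)

# A tiny hand-made co-occurrence matrix X for a 4-word vocabulary (toy numbers).
X = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 2.],
              [0., 1., 2., 0.]])

V, D = X.shape[0], 5                      # vocabulary size, embedding dimension
x_max, alpha, lr = 100.0, 0.75, 0.05      # weighting-function and learning-rate settings

w  = rng.normal(scale=0.1, size=(V, D))   # word embedding layer
u  = rng.normal(scale=0.1, size=(V, D))   # context embedding layer
bw = np.zeros(V)                          # biases of w
bu = np.zeros(V)                          # biases of u

def f(x):
    # GloVe weighting function: (x / x_max)^alpha below the cap, 1 above it.
    return (x / x_max) ** alpha if x < x_max else 1.0

for epoch in range(200):
    cost = 0.0
    for i in range(V):
        for j in range(V):
            if X[i, j] == 0:              # f(0) = 0, so zero-count pairs drop out of the sum
                continue
            diff = w[i] @ u[j] + bw[i] + bu[j] - np.log(X[i, j])
            weight = f(X[i, j])
            cost += weight * diff ** 2
            grad = 2 * weight * diff      # common scalar factor of all the gradients
            gw, gu = grad * u[j], grad * w[i]
            w[i]  -= lr * gw
            u[j]  -= lr * gu
            bw[i] -= lr * grad
            bu[j] -= lr * grad
    # cost should steadily decrease; uncomment to watch it:
    # print(epoch, cost)

After training, w[i] alone, or the sum w[i] + u[i] as the paper does for its released vectors, can be used as the final embedding of word i.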

Code Implementation of GloVe

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import gensim.downloader as api

Downloading the word vectors

glove_model = api.load('glove-wiki-gigaword-300')
[==================================================] 100.0% 376.1/376.1MB downloaded

Vector representation of a word

glove_model["beautiful"]
array([-2.3852e-01, -3.3704e-01, -2.6531e-01, -3.4693e-01, -1.2875e-01,
        1.0698e-01, -2.4669e-02,  2.6829e-02, -1.0024e-01, -9.1494e-01,
        4.8989e-01, -1.0362e-01, -2.4535e-01,  5.7003e-01, -3.5189e-02,
       -3.7690e-01, -6.9177e-02, -2.5942e-01,  2.2511e-01,  5.1855e-01,
       -3.9326e-01,  1.2665e+00, -4.4396e-01,  7.6827e-01, -1.0909e-02,
       -2.4237e-01, -1.0275e-01, -4.9712e-01, -8.0183e-02,  4.0611e-01,
        2.1275e-01,  6.5182e-01, -1.1487e+00,  4.6080e-03, -6.8265e-01,
        9.6941e-01, -7.6194e-02, -4.1590e-01, -2.7069e-01, -5.3364e-02,
       -7.9631e-02, -2.6670e-01, -2.1544e-02,  2.2392e-01,  5.5741e-02,
        8.4611e-02,  8.6011e-01,  4.8927e-01,  4.7428e-01, -2.4446e-01,
       -2.1553e-01,  1.4370e-01,  5.0683e-01, -5.8435e-01, -5.6243e-01,
       -6.3757e-02, -2.2861e-01, -6.9374e-02,  5.6618e-01, -8.7813e-02,
       -2.1272e-01, -1.6319e-01,  3.3545e-01,  1.3707e-01, -1.1920e-03,
        9.7461e-02,  3.2382e-01, -2.2693e-01, -1.7767e-01,  4.0166e-02,
       -5.3029e-01, -4.5809e-01, -2.3483e-01,  3.1453e-01,  1.7206e-01,
       -6.9996e-02,  2.1818e-01, -1.3370e-01,  7.0031e-02, -5.1293e-01,
       -5.9242e-01,  2.6683e-01,  3.4211e-02, -2.3073e-01,  9.2629e-02,
        9.7956e-01, -1.7105e-01,  3.4766e-01,  1.5655e-01,  1.6473e-01,
       -4.8657e-02, -3.3195e-01,  4.0701e-02, -3.6882e-01,  3.8325e-02,
        1.4471e-01,  4.5228e-01, -5.3237e-01,  1.6001e-01,  9.1856e-02,
       -3.3670e-02, -2.8456e-01,  2.7661e-01,  2.5678e-01, -5.0600e-01,
        9.0557e-02,  2.3590e-01, -2.3907e-01, -1.0190e-01, -4.3150e-01,
       -1.9739e-01,  3.4452e-01,  3.3246e-01, -8.2128e-02,  2.3898e-01,
        2.8935e-02,  3.4182e-01,  6.4785e-01,  4.4846e-02,  2.3185e-01,
       -9.0600e-02,  3.2501e-01, -1.1690e-01,  6.3490e-01, -3.9302e-02,
       -1.9762e-01, -1.1636e-01,  6.4526e-01, -6.8176e-01, -2.7499e-01,
        2.3495e-01,  3.8022e-01, -7.2129e-02,  3.2216e-01, -6.3217e-01,
       -1.3036e-01, -7.2367e-02, -1.8482e-01, -7.8929e-02,  1.2480e-01,
        9.6149e-02,  4.8628e-02, -5.9320e-02, -1.5919e-01, -2.1533e-01,
       -3.8724e-01,  3.5391e-01,  3.4231e-01, -3.9314e-01, -1.1976e-01,
       -3.7050e-01, -1.2089e-01, -5.8203e-03, -3.3442e-01,  6.4367e-01,
       -2.2489e-01, -4.5688e-01,  1.8812e-02,  1.7772e-01, -1.5363e-01,
        4.2730e-02, -3.4811e-01,  6.1017e-01,  3.0632e-01, -4.0521e-01,
        1.1642e-02,  8.0483e-05,  1.9665e-01,  2.7749e-01, -2.7826e-01,
       -2.8165e-01, -1.7904e-01, -3.9776e-01,  2.9140e-01,  8.6537e-02,
       -5.2711e-02, -2.4818e-01,  1.3174e-01, -5.0422e-01, -1.7553e-01,
       -5.0302e-02, -6.6879e-01,  4.8007e-01,  2.3588e-02,  3.8455e-01,
       -2.0443e-01,  3.2373e-01, -2.6863e-01, -1.1948e-03,  4.1770e-01,
       -2.8839e-01, -5.8236e-02, -1.5103e-01, -5.2364e-02, -4.4363e-01,
        1.8137e-01, -4.0447e-01, -4.2684e-01, -3.0427e-01,  3.6178e-01,
        1.5595e+00, -3.3639e-01, -9.7822e-02, -1.7268e-02,  6.5117e-02,
       -3.8777e-01,  5.7876e-02,  4.3497e-01, -3.1166e-01, -2.7618e-01,
       -1.7773e-01,  3.3641e-01, -1.0508e-01, -3.1227e-01,  3.9182e-01,
       -3.7915e-02,  2.5229e-01, -6.6904e-01,  1.0371e-01,  1.7643e-01,
        2.5485e-01, -3.6815e-02,  1.7848e-01,  8.2182e-02, -6.1077e-01,
        2.0832e-01,  4.1189e-01, -2.0953e-01, -5.2351e-01, -4.5922e-02,
        1.0356e-01, -1.1626e-01, -2.3241e-01, -4.1366e-01, -5.6315e-02,
        4.5747e-01, -2.9707e-01, -1.6137e-01, -3.3410e-01, -3.1331e-01,
        3.3484e-01,  1.7417e-01, -4.1686e-01,  4.8983e-01, -1.7848e-01,
        4.7937e-01, -3.0127e-01,  4.2611e-01,  1.9762e-01,  3.4076e-01,
        2.6479e-01, -5.3770e-01, -1.0298e-01, -3.8824e-02,  7.3822e-01,
        3.3278e-02,  1.1207e-01,  7.8605e-02,  1.3025e-01, -3.6788e-01,
       -3.6885e-01, -4.0836e-01, -1.6628e-01, -2.1534e-01, -7.3451e-02,
       -3.4754e-01, -8.6115e-03, -2.1517e-01,  4.9213e-01,  2.8894e-01,
        1.9182e-01, -5.3703e-01,  1.5176e-02, -1.9287e-02,  1.2511e-01,
        2.9509e-01, -1.0003e+00,  1.0112e-01, -1.3583e-01, -3.6766e-01,
       -3.1532e-01,  3.9986e-01, -7.4484e-02, -1.6293e-01, -6.4623e-01,
        1.8405e-01, -2.3892e-01,  3.5487e-01, -2.8264e-01, -3.4756e-01,
        1.9120e-01,  7.6232e-02, -4.6812e-01,  3.9841e-01,  1.2330e-01,
       -2.5784e-01,  4.5218e-01,  3.2891e-01,  3.7239e-02,  2.3779e-01],
      dtype=float32)

Understanding the meaning of the word vectors

glove_model.most_similar("girl")
[('boy', 0.8272891044616699),
 ('woman', 0.729641854763031),
 ('girls', 0.7227291464805603),
 ('teenager', 0.650977373123169),
 ('teenage', 0.6492719650268555),
 ('mother', 0.6417974829673767),
 ('boys', 0.6283578872680664),
 ('child', 0.6229295134544373),
 ('teen', 0.612524151802063),
 ('daughter', 0.6050207614898682)]

QUEEN-GIRL+BOY = KING

This is the most amazing result we get from this model. Here we subtract the embedding of the word "girl" from the embedding of "queen"; the royalty component of "queen" remains, and when it is added to the embedding of the word "boy", the closest result is "king", which is exactly the logical answer. How amazing.

glove_model.most_similar(positive=['boy', 'queen'], negative=['girl'], topn=1)
[('king', 0.6770139336585999)]

Visualizing The Word Embedding Using TSNE

We can see from the plot that there is a sense of gender for the human-related vocabulary, and that the embedding clearly separates human and non-human (fruit) words.

vocab = ["boy", "girl", "man", "woman", "king", "queen", "banana", "apple", "mango", "fruit", "coconut", "orange"]

def tsne_plot(model):
    # Collect the 300-d GloVe vector for each word in our small vocabulary.
    labels = []
    wordvecs = []

    for word in vocab:
        wordvecs.append(model[word])
        labels.append(word)

    # Project the 300-d vectors down to 2-d with t-SNE so they can be plotted.
    tsne_model = TSNE(perplexity=3, n_components=2, init='pca', random_state=42)
    coordinates = tsne_model.fit_transform(np.array(wordvecs))

    x = []
    y = []
    for value in coordinates:
        x.append(value[0])
        y.append(value[1])

    # Scatter-plot each word and annotate the point with the word itself.
    plt.figure(figsize=(8,8))
    for i in range(len(x)):
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i],
                     xy=(x[i], y[i]),
                     xytext=(2, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.show()

tsne_plot(glove_model)
[t-SNE scatter plot of the 12 word vectors]

Summary

In this blog post we saw how the GloVe model works, the mathematical intuition behind the algorithm, and its implementation in code.