Word2Vec
All about how the Word2Vec method works to extract embeddings from textual data.
- All about Word2Vec
- How Does Word2Vec Work?
- Code Implementation of Word2Vec
- Model Understanding the Meaning of Words
- QUEEN - GIRL + BOY = KING
- Visualizing the Word Embeddings Using t-SNE
All about Word2Vec
We have previously learned about two methods that extract numerical features from textual data: BoW and TF-IDF. Their main drawbacks are that they do not capture the semantic relationships between words and they generate a large number of features, which can be cumbersome to work with.
The Word2Vec method solves this problem very efficiently by creating word embeddings. Word embeddings are an integral part of solving many problems in NLP; they convey how humans understand language to a machine. You can think of them as vectorized representations of text. Word2Vec, a common method of generating word embeddings, has a variety of applications such as text similarity, recommendation systems, sentiment analysis, etc.
Before we get into Word2Vec, let's establish an understanding of what word embeddings are. This is important because the overall output of Word2Vec is an embedding associated with each unique word passed through the algorithm.
Word embedding is a technique where each individual word is transformed into a numerical representation (a vector). Each word is mapped to one vector, and this vector is learned in a way that resembles training a neural network. The vectors try to capture various characteristics of the word with regard to the overall text, such as its semantic relationships, definition, and context. With these numerical representations, you can do many things, like identify similarity or dissimilarity between words.
Clearly, these are integral as inputs to various machine learning tasks. A machine cannot process text in its raw form, so converting the text into embeddings allows us to feed it to classic machine learning models. The simplest embedding is a one-hot encoding of the text, where each word is mapped to a vector with a single non-zero entry.
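To make that concrete, here is a minimal sketch of one-hot encoding with NumPy; the tiny vocabulary is made up purely for illustration.

import numpy as np

# Toy vocabulary, for illustration only
toy_vocab = ["king", "queen", "boy", "girl"]
word_to_index = {word: i for i, word in enumerate(toy_vocab)}

def one_hot(word):
    # Vector of zeros with a single 1 at the word's index
    vec = np.zeros(len(toy_vocab))
    vec[word_to_index[word]] = 1
    return vec

print(one_hot("queen"))  # [0. 1. 0. 0.]

Unlike Word2Vec embeddings, these one-hot vectors are sparse and carry no notion of similarity between words, which is exactly the gap Word2Vec fills.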
How Does Word2Vec Work?
Word2Vec was a breakthrough in the world of NLP. Tomas Mikolov, a Czech computer scientist and currently a researcher at CIIRC (Czech Institute of Informatics, Robotics and Cybernetics), was one of the leading contributors to the research and implementation of Word2Vec. It comes in two variants: the Continuous Bag of Words (CBOW) model and the Skip-gram model.
Continuous Bag of Words (CBOW) Model
- In this method the context words are given to the model and it generates a vector that represents the target word. A window length (how far we look around the target word for context words) is set, which determines how many words are taken into consideration. The words selected according to the window length are sent to an embedding layer of size vocab_size × embedding_dimension (typically 300), which generates the embedding vector of each context word that is fed to the model.
- Next, we take the average of all the embeddings generated from the context words by feeding them to an average layer. This layer outputs a single vector that is the average of all the context-word embeddings.
- This averaged embedding is passed to a softmax layer to produce a predicted word. By comparing the prediction with the actual target word we can calculate the loss and backpropagate, gradually arriving at weights that predict the target word nearly perfectly. A small architecture sketch follows this list.
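As a rough illustration of the CBOW architecture described above, here is a minimal Keras sketch. It is not the original Word2Vec implementation; vocab_size, embed_dim, and window are assumed placeholder values. It simply averages the context-word embeddings and predicts the target word with a softmax over the vocabulary.

import tensorflow as tf

# Assumed placeholder values, for illustration only
vocab_size = 10000   # number of unique words in the corpus
embed_dim = 300      # embedding dimension
window = 2           # context words taken on each side of the target

cbow = tf.keras.Sequential([
    # Embedding layer: maps each of the 2*window context word ids to a vector
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),
    # Average layer: averages the context-word embeddings into one vector
    tf.keras.layers.GlobalAveragePooling1D(),
    # Softmax layer: predicts the target word over the whole vocabulary
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
cbow.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training inputs would be batches of shape (batch_size, 2*window) holding
# context word ids, with the target word id as the label for each window.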
Skip-gram Model
- In this model we pass the target word paired with a context word; some pairs are true (target, context) pairs taken from the text and others are random (negative) pairs. Learning to tell these apart is what produces the word embeddings. A sketch of this setup follows this list.
- The embeddings of the target word and the context word are fed to a merge layer, where a dot product between the two embeddings produces a single score.
- This score is fed to a sigmoid layer to produce a label of 0 or 1: 1 for a pair that is a true match and 0 for a pair that is not.
- We then calculate the loss and backpropagate to obtain weights that classify the pairs nearly perfectly.
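Below is a minimal Keras sketch of this pairwise setup (again an illustrative assumption, not the original implementation): two embedding layers for the target and context words, a dot-product merge, and a sigmoid output that labels the pair as 1 (true pair) or 0 (random pair).

import tensorflow as tf

vocab_size = 10000   # assumed placeholder vocabulary size
embed_dim = 300      # embedding dimension

target_in = tf.keras.Input(shape=(1,), name="target_word")
context_in = tf.keras.Input(shape=(1,), name="context_word")

# Separate embedding lookups for the target word and the context word
target_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(target_in)
context_emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(context_in)

# Merge layer: dot product of the two embeddings gives a single score
score = tf.keras.layers.Dot(axes=2)([target_emb, context_emb])
score = tf.keras.layers.Flatten()(score)

# Sigmoid layer: 1 for a true (target, context) pair, 0 for a random pair
output = tf.keras.layers.Activation("sigmoid")(score)

skipgram = tf.keras.Model(inputs=[target_in, context_in], outputs=output)
skipgram.compile(optimizer="adam", loss="binary_crossentropy")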
Code Implementation of Word2Vec
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import gensim.downloader as api

# Load the pretrained 300-dimensional Word2Vec model trained on the Google News corpus
word2vec_model = api.load('word2vec-google-news-300')
Model Understanding the Meaning of Words
# The 300-dimensional embedding vector for the word "beautiful"
word2vec_model["beautiful"]

# Words most similar to "girl", ranked by cosine similarity
word2vec_model.most_similar("girl")
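As an optional extra check (not in the original walkthrough), a couple of other gensim KeyedVectors queries show that the model has picked up word meaning:

# Cosine similarity between two words: semantically close words score higher
word2vec_model.similarity("girl", "boy")
word2vec_model.similarity("girl", "banana")

# Pick the word that does not belong with the others
word2vec_model.doesnt_match(["king", "queen", "boy", "mango"])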
QUEEN - GIRL + BOY = KING
This is the most amazing result we get from this model. Here we subtract the embedding of the word girl from the embedding of queen; what remains captures the royalty aspect of queen, and when we add the embedding of the word boy, the closest word to the result is king, which is exactly what logic would suggest. How amazing.
# positive words are added, negative words are subtracted: queen - girl + boy ≈ king
word2vec_model.most_similar(positive=['boy', 'queen'], negative=['girl'], topn=1)
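For illustration, the same analogy can be computed by hand on the raw vectors and then looked up with gensim's similar_by_vector (an extra check, not part of the original code):

# Do the arithmetic directly on the raw 300-dimensional vectors
analogy_vector = word2vec_model["queen"] - word2vec_model["girl"] + word2vec_model["boy"]

# Find the words whose embeddings are closest to the constructed vector
word2vec_model.similar_by_vector(analogy_vector, topn=3)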
Visualizing the Word Embeddings Using t-SNE
vocab = ["boy", "girl", "man", "woman", "king", "queen", "banana", "apple", "mango", "fruit", "coconut", "orange"]

def tsne_plot(model):
    # Collect the embedding vector and label for each word in our small vocabulary
    labels = []
    wordvecs = []
    for word in vocab:
        wordvecs.append(model[word])
        labels.append(word)

    # Reduce the 300-dimensional vectors to 2 dimensions with t-SNE
    tsne_model = TSNE(perplexity=3, n_components=2, init='pca', random_state=42)
    coordinates = tsne_model.fit_transform(np.array(wordvecs))

    x = []
    y = []
    for value in coordinates:
        x.append(value[0])
        y.append(value[1])

    # Scatter plot with each point annotated by its word
    plt.figure(figsize=(8, 8))
    for i in range(len(x)):
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i],
                     xy=(x[i], y[i]),
                     xytext=(2, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.show()

tsne_plot(word2vec_model)