TF-IDF
All about how the TF-IDF method works to extract numerical features from textual data.
Why TF-IDF?
In our previous blog post, BOW, we saw how bag of words extracts numerical features from textual data. Its limitation is that BOW treats every word as equally important, so it loses any sense of which words actually carry meaning in the data. TF-IDF addresses this: it extracts numerical features that reflect how important each word is, preserving some of that semantic signal.
How does TF-IDF work?
TF-IDF is a method to extract numerical features from textual data. It is based on two factors: term frequency (TF) and inverse document frequency (IDF).
- Term Frequency (TF) measures how frequently a term occurs in a document. Since every document is different in length, a term is likely to appear many more times in a long document than in a short one, so the raw count is normalised by the document length:

$$TF(t, d) = \frac{\text{number of times term } t \text{ appears in document } d}{\text{total number of terms in document } d}$$

- Inverse Document Frequency (IDF) measures how important a term is. While computing TF, all terms are considered equally important. However, certain terms such as "is", "of", and "that" may appear many times but carry little information. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:

$$IDF(t) = \log\left(\frac{\text{total number of documents}}{\text{number of documents containing term } t}\right)$$
Finally, after calculating term frequency and inverse document frequency, we multiply them to get the final TF-IDF score, and this is how each word in the textual data gets its numerical feature:

$$TFIDF(t, d) = TF(t, d) \times IDF(t)$$
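To make these formulas concrete, here is a minimal from-scratch sketch in Python. The function names are purely illustrative, and it follows the textbook formulas above (with a base-10 log) rather than any particular library's variant:

```python
import math

def term_frequency(term, document):
    # TF = (count of term in document) / (total terms in document)
    words = document.split()
    return words.count(term) / len(words)

def inverse_document_frequency(term, documents):
    # IDF = log(total documents / documents containing the term)
    containing = sum(1 for doc in documents if term in doc.split())
    return math.log10(len(documents) / containing)

def tf_idf(term, document, documents):
    # TF-IDF = TF * IDF
    return term_frequency(term, document) * inverse_document_frequency(term, documents)

docs = ["the sky is blue", "the sun is bright", "the sun in the sky is bright"]
print(term_frequency("sky", docs[0]))           # 1/4 = 0.25
print(inverse_document_frequency("sky", docs))  # log10(3/2) ≈ 0.176
print(tf_idf("sky", docs[0], docs))             # 0.25 * 0.176 ≈ 0.044
```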
Let's take three sentences and calculate their TF-IDF:
- sentence 1: "He is a good boy."
- sentence 2: "She is a good girl."
- sentence 3: "Boy and girl are good."
Now, after preprocessing each sentence (lowercasing and removing stop words), we get:
- sentence 1: "good boy"
- sentence 2: "good girl"
- sentence 3: "boy girl good"
So here is the calculation, over the vocabulary {good, boy, girl} (using base-10 logarithms):

TF calculation

| word | sentence 1 | sentence 2 | sentence 3 |
|------|------------|------------|------------|
| good | 1/2 | 1/2 | 1/3 |
| boy  | 1/2 | 0   | 1/3 |
| girl | 0   | 1/2 | 1/3 |

IDF calculation

| word | IDF = log(3 / number of sentences containing the word) |
|------|--------------------------------------------------------|
| good | log(3/3) = 0 |
| boy  | log(3/2) ≈ 0.176 |
| girl | log(3/2) ≈ 0.176 |

TF-IDF calculation (TF × IDF)

| word | sentence 1 | sentence 2 | sentence 3 |
|------|------------|------------|------------|
| good | 0 | 0 | 0 |
| boy  | 0.088 | 0 | 0.059 |
| girl | 0 | 0.088 | 0.059 |
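We can verify these numbers with a few lines of Python. This is a quick from-scratch check using the same base-10 logarithm as the tables above, not scikit-learn's implementation:

```python
import math

sentences = {"s1": ["good", "boy"], "s2": ["good", "girl"], "s3": ["boy", "girl", "good"]}

for word in ["good", "boy", "girl"]:
    # Number of sentences containing the word
    df = sum(word in tokens for tokens in sentences.values())
    idf = math.log10(len(sentences) / df)
    # TF-IDF per sentence, rounded to match the table
    scores = {name: round(tokens.count(word) / len(tokens) * idf, 3)
              for name, tokens in sentences.items()}
    print(word, scores)
```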
This is how TF-IDF turns textual data into numerical features. Notice that "good", which appears in every sentence, scores 0 everywhere: unlike BOW, TF-IDF automatically down-weights uninformative words and gives importance to the distinctive ones.
Let's now apply the same idea to a longer paragraph, using NLTK for preprocessing and scikit-learn's TfidfVectorizer:

```python
import nltk
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

# One-time downloads of the NLTK resources used below
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
Why? Because we respect the freedom of others.That is why my
first vision is that of freedom. I believe that India got its first vision of
this in 1857, when we started the War of Independence. It is this freedom that
we must protect and nurture and build on. If we are not free, no one will respect us.
My second vision for India's development. For fifty years we have been a developing nation.
It is time we see ourselves as a developed nation. We are among the top 5 nations of the world
in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.
Our achievements are being globally recognised today. Yet we lack the self-confidence to
see ourselves as a developed nation, self-reliant and self-assured. Isn't this incorrect?
I have a third vision. India must stand up to the world. Because I believe that unless India
stands up to the world, no one will respect us. Only strength respects strength. We must be
strong not only as a military power but also as an economic power. Both must go hand-in-hand.
My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of
space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.
I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.
I see four milestones in my career"""
ps = PorterStemmer()  # imported for reference; we use the lemmatizer below instead
wordnet = WordNetLemmatizer()

# Treat each sentence of the paragraph as one document
sentences = nltk.sent_tokenize(paragraph)
stop_words = set(stopwords.words('english'))  # build the stop-word set once

corpus = []
for sentence in sentences:
    review = re.sub('[^a-zA-Z]', ' ', sentence)  # keep letters only
    review = review.lower().split()
    # Remove stop words and reduce each remaining word to its lemma
    review = [wordnet.lemmatize(word) for word in review if word not in stop_words]
    corpus.append(' '.join(review))

# Learn the vocabulary and IDF weights, then convert each sentence to a TF-IDF vector
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus).toarray()
print(X)
```
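It is worth knowing that TfidfVectorizer does not use the exact textbook formulas from earlier: by default it applies a smoothed, natural-log IDF and then L2-normalises each row vector, so the raw scores will differ from a hand calculation even though the relative weighting of words is similar. Continuing from the snippet above, a short illustrative addition can map each column of X back to its word (get_feature_names_out is available in scikit-learn 1.0 and later):

```python
# Continuing from the previous snippet: map columns of X back to words
words = tfidf.get_feature_names_out()

# Show the non-zero TF-IDF scores for the first sentence, highest first
first_sentence = dict(zip(words, X[0]))
for word, score in sorted(first_sentence.items(), key=lambda kv: -kv[1]):
    if score > 0:
        print(f"{word}: {score:.3f}")
```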