I want to use TfidfVectorizer to extract bigrams, but extending the stop-words list does not work for bigrams. How can I fix this?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import text
import pandas as pd
content = CORPUS
my_stop_words = text.ENGLISH_STOP_WORDS.union(['don know', 'good morning', 'happy birthday'])
vectorizer = TfidfVectorizer(stop_words=my_stop_words, max_features=25, ngram_range=(2, 2))
X = vectorizer.fit_transform(content).todense()
df = pd.DataFrame(X, columns=vectorizer.get_feature_names())
df.to_csv('test.csv')
I get this warning, and the bigram stop words are not removed from the output:
Your stop_words may be inconsistent with your preprocessing. Tokenizing the stop words generated tokens ['birthday', 'don', ...] not in stop_words.
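As I understand it, the vectorizer tokenizes the stop-words list the same way it tokenizes documents, so multi-word entries never match: stop words are removed at the unigram stage, before n-grams are built. One workaround I am considering is filtering the unwanted bigrams in a custom analyzer instead. A minimal sketch (the sample corpus here is made up for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# toy corpus, just for demonstration
corpus = [
    "good morning everyone happy birthday to you",
    "i don know what to say good morning",
]

# bigrams I want excluded from the vocabulary
bigram_stop = {"don know", "good morning", "happy birthday"}

# reuse the default preprocessing/tokenization, but for bigrams
base = TfidfVectorizer(ngram_range=(2, 2))
bigram_analyzer = base.build_analyzer()

def filtered_analyzer(doc):
    # drop unwanted bigrams AFTER n-gram generation,
    # which is what stop_words alone cannot do
    return [ng for ng in bigram_analyzer(doc) if ng not in bigram_stop]

vectorizer = TfidfVectorizer(analyzer=filtered_analyzer)
X = vectorizer.fit_transform(corpus)
print(sorted(vectorizer.vocabulary_))
```

With this, none of the blacklisted bigrams appear in `vectorizer.vocabulary_`, while the other bigrams are kept. Is this the idiomatic way, or is there a built-in option I am missing?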
question from:
https://stackoverflow.com/questions/65921730/how-to-extend-the-stopwords-list-with-bigrams