I want to remove meaningless words from my dataset.
I saw an approach like this on StackOverflow and tried it:
import nltk

words = set(nltk.corpus.words.words())

sent = "Io andiamo to the beach with my amico."
" ".join(w for w in nltk.wordpunct_tokenize(sent) \
         if w.lower() in words or not w.isalpha())
But now I have a DataFrame; how do I apply this to an entire column?
I tried something like this:
import nltk

words = set(nltk.corpus.words.words())

sent = df['Chats']
df['Chats'] = df['Chats'].apply(lambda w: " ".join(w for w in nltk.wordpunct_tokenize(sent) \
              if w.lower() in words or not w.isalpha()))
But I get the error TypeError: expected string or bytes-like object.
Answer:
Code like the following will create a new column called Clean by applying your function to the Chats column:
import nltk

words = set(nltk.corpus.words.words())

def clean_sent(sent):
    # Keep tokens that are English words, plus non-alphabetic tokens (numbers, punctuation)
    return " ".join(w for w in nltk.wordpunct_tokenize(sent) \
                    if w.lower() in words or not w.isalpha())

df['Clean'] = df['Chats'].apply(clean_sent)
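For reference, here is a minimal end-to-end sketch, assuming a small made-up DataFrame in place of your real data and that the NLTK words corpus has been downloaded once with nltk.download('words'):

import nltk
import pandas as pd

# Assumption: the corpus is already available locally; otherwise run once:
# nltk.download('words')
words = set(nltk.corpus.words.words())

def clean_sent(sent):
    return " ".join(w for w in nltk.wordpunct_tokenize(sent)
                    if w.lower() in words or not w.isalpha())

# Hypothetical sample data standing in for the real df['Chats'] column.
df = pd.DataFrame({'Chats': ["Io andiamo to the beach with my amico.",
                             "hello world 123"]})
df['Clean'] = df['Chats'].apply(clean_sent)
print(df)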
To update the Chats column itself, overwrite it in place:
df['Chats'] = df['Chats'].apply(clean_sent)
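One caveat beyond the original answer: if the column contains NaN or other non-string values, .apply will raise the same TypeError you saw. A hedged sketch, using a hypothetical clean_sent_safe wrapper that passes non-string entries through unchanged:

def clean_sent_safe(value):
    # Assumption: non-string entries (e.g. NaN) should be left untouched
    # rather than tokenized, which would raise the TypeError again.
    if not isinstance(value, str):
        return value
    return clean_sent(value)

df['Chats'] = df['Chats'].apply(clean_sent_safe)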