I'm trying to improve the performance of my code. I want to tokenize two columns of a DataFrame. My original code looked like this:
submission_df['question1'] = submission_df.apply(lambda row: nltk.word_tokenize(row['question1']), axis=1)
submission_df['question2'] = submission_df.apply(lambda row: nltk.word_tokenize(row['question2']), axis=1)
I figured I could merge them into a single call so that I only iterate over all the rows once (there are 2 million of them), so I came up with this:
submission_df['question1'], submission_df['question2'] = submission_df.apply(lambda row: (nltk.word_tokenize(row['question1']), nltk.word_tokenize(row['question2'])), axis=1)
But this doesn't work. Maybe there is also some other way to improve this, rather than just using apply.
Answer:
You can simply use apply on the selected columns together with astype(str), for example:
submission_df[['question1','question2']] = submission_df[['question1','question2']].astype(str).apply(
    lambda row: [nltk.word_tokenize(row['question1']), nltk.word_tokenize(row['question2'])],
    axis=1)
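One caveat: in recent pandas versions, apply with axis=1 that returns a list yields a Series of lists by default, so the two-column assignment above may fail; passing result_type="expand" makes the two tokenized values come back as two columns. A minimal sketch, using str.split as a stand-in tokenizer so the snippet runs without the NLTK punkt model:

```python
import pandas as pd

df = pd.DataFrame({"question1": ["Nice to meet you"],
                   "question2": ["See you"]})

# result_type="expand" turns the two-element list returned per row
# into two columns (labelled 0 and 1) instead of a Series of lists.
out = df[["question1", "question2"]].astype(str).apply(
    lambda row: [row["question1"].split(), row["question2"].split()],
    axis=1, result_type="expand")

df["question1"] = out[0]
df["question2"] = out[1]
print(df.loc[0, "question1"])  # ['Nice', 'to', 'meet', 'you']
```

In practice, replace str.split with nltk.word_tokenize once punkt is available.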
Example:
import nltk
import pandas as pd

df = pd.DataFrame({"A": ["Nice to meet you ", "Nice to meet you ", "Nice to meet you ", 8, 9, 10],
                   "B": [7, 6, 7, "Nice to meet you ", "Nice to meet you ", "Nice to meet you "]})
df[['A','B']] = df[['A','B']].astype(str).apply(
    lambda row: [nltk.word_tokenize(row['A']), nltk.word_tokenize(row['B'])],
    axis=1)
Output:
                       A                      B
0  [Nice, to, meet, you]                    [7]
1  [Nice, to, meet, you]                    [6]
2  [Nice, to, meet, you]                    [7]
3                    [8]  [Nice, to, meet, you]
4                    [9]  [Nice, to, meet, you]
5                   [10]  [Nice, to, meet, you]
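On the "other than apply" part of the question: row-wise apply constructs a Series object for every row, and that overhead tends to dominate on millions of rows. A per-column list comprehension still traverses each column only once and usually runs noticeably faster. A minimal sketch, assuming str.split is an acceptable stand-in for nltk.word_tokenize (which requires the punkt model to be downloaded):

```python
import pandas as pd

# Stand-in tokenizer; substitute nltk.word_tokenize in practice
# (word_tokenize needs the NLTK "punkt" model downloaded).
def tokenize(text):
    return str(text).split()

df = pd.DataFrame({"question1": ["Nice to meet you", "How are you"],
                   "question2": ["Fine thanks", "See you soon"]})

# A plain per-column list comprehension skips DataFrame.apply's
# per-row Series construction; each column is traversed once.
for col in ("question1", "question2"):
    df[col] = [tokenize(t) for t in df[col]]

print(df.loc[0, "question1"])  # ['Nice', 'to', 'meet', 'you']
```

The same loop scales to any number of text columns by extending the tuple of column names.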