I'd like to understand the difference between the following two cases:
    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = [
        'i love my dog',
        'I, love my cat',
        'You love my dog!'
    ]

    tokenizer = Tokenizer(num_words = 1)
    tokenizer.fit_on_texts(sentences)
    word_index = tokenizer.word_index
    print(word_index)
Output – {'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}
versus
    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = [
        'i love my dog',
        'I, love my cat',
        'You love my dog!'
    ]

    tokenizer = Tokenizer(num_words = 100)
    tokenizer.fit_on_texts(sentences)
    word_index = tokenizer.word_index
    print(word_index)
Output – {'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}
If the tokenizer assigns indices to all unique words dynamically anyway, what is num_words for?
Answer:
word_index is simply a mapping of every word in the text corpus to an id, regardless of what num_words is.
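A quick way to check this (a minimal sketch reusing the sentences above; the names tok_small and tok_large are just for illustration):

    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = ['i love my dog', 'I, love my cat', 'You love my dog!']

    tok_small = Tokenizer(num_words = 1)
    tok_large = Tokenizer(num_words = 100)
    tok_small.fit_on_texts(sentences)
    tok_large.fit_on_texts(sentences)

    # Both vocabularies contain all six words -- num_words does not change word_index
    assert tok_small.word_index == tok_large.word_index
    print(len(tok_small.word_index))  # 6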
The difference shows up when the tokenizer is actually used. For example, if we call texts_to_sequences:
    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = [
        'i love my dog',
        'I, love my cat',
        'You love my dog!'
    ]

    tokenizer = Tokenizer(num_words = 1+1)
    tokenizer.fit_on_texts(sentences)
    tokenizer.texts_to_sequences(sentences)  # [[1], [1], [1]]
only the id of the most frequent word, 'love', is returned, because num_words = 2 keeps only the words whose index is below 2.
In contrast,
    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = [
        'i love my dog',
        'I, love my cat',
        'You love my dog!'
    ]

    tokenizer = Tokenizer(num_words = 100+1)
    tokenizer.fit_on_texts(sentences)
    tokenizer.texts_to_sequences(sentences)  # [[3, 1, 2, 4], [3, 1, 2, 5], [6, 1, 2, 4]]
returns the ids of the (up to) 100 most frequent words, which here is all six, since the corpus only contains six unique words.
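In other words, num_words = N keeps only the words whose index is strictly less than N, i.e. the N-1 most frequent words; everything else is silently dropped unless an OOV token is configured. A minimal sketch of both behaviours (the exact OOV handling in the comments reflects the keras_preprocessing versions I'm familiar with, so treat it as an assumption rather than a guarantee):

    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = ['i love my dog', 'I, love my cat', 'You love my dog!']

    # Keep only indices 1..3, i.e. the three most frequent words: 'love', 'my', 'i'
    tokenizer = Tokenizer(num_words = 4)
    tokenizer.fit_on_texts(sentences)
    print(tokenizer.texts_to_sequences(sentences))
    # [[3, 1, 2], [3, 1, 2], [1, 2]]  -- 'dog', 'cat', 'you' are dropped

    # With an OOV token, dropped words are mapped to the OOV index instead
    # (oov_token is inserted at index 1, shifting the other words up by one)
    tokenizer_oov = Tokenizer(num_words = 4, oov_token = '<OOV>')
    tokenizer_oov.fit_on_texts(sentences)
    print(tokenizer_oov.word_index)
    # {'<OOV>': 1, 'love': 2, 'my': 3, 'i': 4, 'dog': 5, 'cat': 6, 'you': 7}
    print(tokenizer_oov.texts_to_sequences(sentences))
    # [[1, 2, 3, 1], [1, 2, 3, 1], [1, 2, 3, 1]]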