I am trying to build an autoencoder neural network with Keras (TensorFlow backend) to detect outliers. My data is a single column of text, one word per line (see https://pastebin.com/hEvm6qWg), 139 lines in total.
When I fit the model on my data, I get the following error:
ValueError: Error when checking input: expected input_1 to have shape (139,) but got array with shape (140,)
But I cannot understand why it sees an array of shape 140. My full code is below:
from keras import Input, Model
from keras.layers import Dense
from keras.preprocessing.text import Tokenizer

with open('drawables.txt', 'r') as arquivo:
    dados = arquivo.read().splitlines()

tokenizer = Tokenizer(filters='')
tokenizer.fit_on_texts(dados)
x_dados = tokenizer.texts_to_matrix(dados, mode="freq")

tamanho = len(tokenizer.word_index)

x = Input(shape=(tamanho,))

# Encoder
hidden_1 = Dense(tamanho, activation='relu')(x)
h = Dense(tamanho, activation='relu')(hidden_1)

# Decoder
hidden_2 = Dense(tamanho, activation='relu')(h)
r = Dense(tamanho, activation='sigmoid')(hidden_2)

autoencoder = Model(input=x, output=r)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_dados, epochs=5, shuffle=False)
I am completely lost. I do not even know whether my autoencoder approach is correct. What am I doing wrong?
Answer:
The word_index of Tokenizer starts at 1, not at 0.
Example:
tokenizer = Tokenizer(filters='')
tokenizer.fit_on_texts(["this a cat", "this is a dog"])
print(tokenizer.word_index)
Output:
{'this': 1, 'a': 2, 'cat': 3, 'is': 4, 'dog': 5}
The indices start at 1, not at 0. So when these indices are used to build the term-frequency matrix

x_dados = tokenizer.texts_to_matrix(["this a cat", "this is a dog"], mode="freq")

the shape of x_dados is 2x6, not 2x5, because NumPy arrays are 0-indexed and column 0 is simply never used. The number of columns in x_dados is therefore 1 + len(tokenizer.word_index).
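To see the off-by-one concretely, here is a minimal sketch in plain Python (no Keras) that mimics how 1-based word indices force an extra, unused column 0:

```python
# 1-based word indices, as Tokenizer produces them
word_index = {'this': 1, 'a': 2, 'cat': 3, 'is': 4, 'dog': 5}

texts = [["this", "a", "cat"], ["this", "is", "a", "dog"]]

# Column j of the matrix corresponds to word index j, so we need
# len(word_index) + 1 columns: column 0 is never written to.
num_cols = len(word_index) + 1
matrix = [[0.0] * num_cols for _ in texts]
for row, words in enumerate(texts):
    for w in words:
        matrix[row][word_index[w]] += 1.0 / len(words)  # "freq" mode

print(len(matrix), len(matrix[0]))  # 2 6
```

This reproduces the 2x6 shape above: five words in the vocabulary, six columns in the matrix.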
So to fix your code, change

tamanho = len(tokenizer.word_index)

to

tamanho = len(tokenizer.word_index) + 1
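An equivalent fix (my suggestion, not part of the original answer) is to read the input width directly off the matrix instead of hard-coding the +1, so the model always matches whatever texts_to_matrix actually produced. A minimal sketch with stand-in values:

```python
import numpy as np

# Stand-in for the output of tokenizer.texts_to_matrix in "freq" mode
# (hypothetical values; column 0 is the unused slot):
x_dados = np.array([[0.0, 1/3, 1/3, 1/3, 0.0,  0.0],
                    [0.0, 0.25, 0.25, 0.0, 0.25, 0.25]])

# The input width comes straight from the data, off-by-one mistakes excluded:
tamanho = x_dados.shape[1]
print(tamanho)  # 6
```

You would then pass this tamanho to Input(shape=(tamanho,)) as before.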
Working sample (note that an autoencoder also needs a target, so fit receives x_dados twice):
dados = ["this is a cat", "that is a dog and a cat"] * 100

tokenizer = Tokenizer(filters='')
tokenizer.fit_on_texts(dados)
x_dados = tokenizer.texts_to_matrix(dados, mode="freq")

tamanho = len(tokenizer.word_index) + 1

x = Input(shape=(tamanho,))

# Encoder
hidden_1 = Dense(tamanho, activation='relu')(x)
h = Dense(tamanho, activation='relu')(hidden_1)

# Decoder
hidden_2 = Dense(tamanho, activation='relu')(h)
r = Dense(tamanho, activation='sigmoid')(hidden_2)

autoencoder = Model(input=x, output=r)
print(autoencoder.summary())
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_dados, x_dados, epochs=5, shuffle=False)