Shape error in a custom LSTM layer

I have been trying to write a custom LSTM layer as a basis for further improvements, but the layer that follows my custom LSTM fails with the error below.

My environment:

  • Windows 10
  • Keras 2.2.0
  • Python 3.6

Traceback (most recent call last):
  File "E:/PycharmProjects/dialogResearch/dialog/classifier.py", line 60, in <module>
    model = build_model(word_dict, args.max_len, args.max_sents, args.embedding_dim)
  File "E:\PycharmProjects\dialogResearch\dialog\model\keras_himodel.py", line 177, in build_model
    l_dense = TimeDistributed(Dense(200))(l_lstm)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 592, in __call__
    self.build(input_shapes[0])
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\wrappers.py", line 162, in build
    assert len(input_shape) >= 3
AssertionError

The code of my custom LSTM is as follows:

from keras import backend as K
from keras import initializers
from keras.engine.topology import Layer


class CustomLSTM(Layer):
    def __init__(self, output_dim, return_sequences, **kwargs):
        self.init = initializers.get('normal')
        # self.input_spec = [InputSpec(ndim=3)]
        self.output_dim = output_dim
        self.return_sequences = return_sequences
        super(CustomLSTM, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape) == 3
        self.original_shape = input_shape
        # Input-to-hidden weights for the input, forget, output and update gates
        self.Wi = self.add_weight('Wi', (input_shape[-1], self.output_dim), initializer=self.init, trainable=True)
        self.Wf = self.add_weight('Wf', (input_shape[-1], self.output_dim), initializer=self.init, trainable=True)
        self.Wo = self.add_weight('Wo', (input_shape[-1], self.output_dim), initializer=self.init, trainable=True)
        self.Wu = self.add_weight('Wu', (input_shape[-1], self.output_dim), initializer=self.init, trainable=True)
        # Hidden-to-hidden (recurrent) weights
        self.Ui = self.add_weight('Ui', (self.output_dim, self.output_dim), initializer=self.init, trainable=True)
        self.Uf = self.add_weight('Uf', (self.output_dim, self.output_dim), initializer=self.init, trainable=True)
        self.Uo = self.add_weight('Uo', (self.output_dim, self.output_dim), initializer=self.init, trainable=True)
        self.Uu = self.add_weight('Uu', (self.output_dim, self.output_dim), initializer=self.init, trainable=True)
        # Gate biases
        self.bi = self.add_weight('bi', (self.output_dim,), initializer=self.init, trainable=True)
        self.bf = self.add_weight('bf', (self.output_dim,), initializer=self.init, trainable=True)
        self.bo = self.add_weight('bo', (self.output_dim,), initializer=self.init, trainable=True)
        self.bu = self.add_weight('bu', (self.output_dim,), initializer=self.init, trainable=True)
        super(CustomLSTM, self).build(input_shape)

    def step_op(self, step_in, states):
        i = K.softmax(K.dot(step_in, self.Wi) + K.dot(states[0], self.Ui) + self.bi)
        f = K.softmax(K.dot(step_in, self.Wf) + K.dot(states[0], self.Uf) + self.bf)
        o = K.softmax(K.dot(step_in, self.Wo) + K.dot(states[0], self.Uo) + self.bo)
        u = K.tanh(K.dot(step_in, self.Wu) + K.dot(states[0], self.Uu) + self.bu)
        c = i * u + f * states[1]
        h = o * K.tanh(c)
        return h, [h, c]

    def call(self, x, mask=None):
        init_states = [K.zeros((K.shape(x)[0], self.output_dim)),
                       K.zeros((K.shape(x)[0], self.output_dim))]
        outputs = K.rnn(self.step_op, x, init_states)
        if self.return_sequences:
            return outputs[1]   # full sequence of hidden states
        else:
            return outputs[0]   # last hidden state only

    def compute_output_shape(self, input_shape):
        return input_shape[0], input_shape[-1]
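Note that K.rnn returns a 3-tuple, (last_output, outputs, new_states), which is what the indexing in call above relies on. A minimal sketch of that contract, using a trivial pass-through step function and made-up shapes:

import numpy as np
from keras import backend as K

def step(x_t, states):
    # trivial step: emit the input unchanged and keep the state as-is
    return x_t, states

x = K.constant(np.random.rand(2, 5, 3))   # (batch, timesteps, features)
init_states = [K.zeros((2, 3))]
last_output, outputs, _ = K.rnn(step, x, init_states)
print(K.int_shape(last_output))  # (2, 3)    -> the return_sequences=False case
print(K.int_shape(outputs))      # (2, 5, 3) -> the return_sequences=True case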

The model code is as follows:

from keras.layers import Dense, Embedding, Input, TimeDistributed
from keras.models import Model
from keras.optimizers import Nadam

# AttLayer, precision, recall and f1 are defined elsewhere in the project.

def build_model(words, max_len, max_sents, embedding_dim):
    # Sentence-level encoder
    sentence_input = Input(shape=(max_len,), dtype='int32')
    embedding_layer = Embedding(len(words) + 1,
                                embedding_dim,
                                input_length=max_len,
                                trainable=True)
    embedded_sequences = embedding_layer(sentence_input)
    l_lstm = CustomLSTM(200, return_sequences=True)(embedded_sequences)
    print(l_lstm.get_shape())
    l_dense = TimeDistributed(Dense(200))(l_lstm)
    l_att = AttLayer()(l_dense)
    sentEncoder = Model(sentence_input, l_att)

    # Document-level encoder built on top of the sentence encoder
    review_input = Input(shape=(max_sents, max_len), dtype='int32')
    review_encoder = TimeDistributed(sentEncoder)(review_input)
    l_lstm_sent = CustomLSTM(200, return_sequences=True)(review_encoder)
    l_dense_sent = TimeDistributed(Dense(200))(l_lstm_sent)
    l_att_sent = AttLayer()(l_dense_sent)
    preds = Dense(3, activation='softmax')(l_att_sent)

    model = Model(review_input, preds)
    optimizer = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizer,
                  metrics=[precision, recall, f1, 'acc'])
    return model
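The builder is invoked as in the traceback above; a hedged usage sketch (the vocabulary and size values here are hypothetical placeholders for the project's preprocessing output, and AttLayer / precision / recall / f1 must be importable):

word_dict = {'hello': 1, 'world': 2}   # toy vocabulary, for illustration only
model = build_model(word_dict, max_len=50, max_sents=10, embedding_dim=200)
model.summary()                        # final output shape should be (None, 3)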

Thanks for your help.


Answer:

I think the error occurs because compute_output_shape returns an incorrect shape when return_sequences=True. In that case the layer's output is 3-D, (batch, timesteps, output_dim), but the method reports a 2-D tuple, so the assert len(input_shape) >= 3 in TimeDistributed.build fails. I would try the following:

def compute_output_shape(self, input_shape):
    if self.return_sequences:
        # (batch, timesteps, output_dim)
        return (input_shape[0], input_shape[1], self.output_dim)
    # (batch, output_dim)
    return (input_shape[0], self.output_dim)
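A minimal sketch to verify the idea, assuming the CustomLSTM class above with the corrected compute_output_shape (the input sizes are made up): once the layer reports a 3-D shape for return_sequences=True, the TimeDistributed assertion passes and the model builds.

from keras.layers import Dense, Input, TimeDistributed
from keras.models import Model

inp = Input(shape=(50, 100))                       # (timesteps, features)
seq = CustomLSTM(200, return_sequences=True)(inp)  # now reports (None, 50, 200)
out = TimeDistributed(Dense(200))(seq)             # no AssertionError: shape is 3-D
print(Model(inp, out).output_shape)                # (None, 50, 200)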

