The code had been running fine for months, but after a change I made it somehow broke, and I can't get it back to its original state.
def bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                  batch_size=68, units=128, learning_rate=0.005,
                  epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if logs.get('acc') > 0.90:
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0,
                      input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout,
                                 recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss=loss, optimizer=adamopt, metrics=['accuracy'])

    history = model.fit(X_train, y_train, batch_size=batch_size,
                        epochs=epochs, validation_data=(X_test, y_test),
                        verbose=1, callbacks=[callbacks])
    score, acc = model.evaluate(X_test, y_test, batch_size=batch_size)
    yhat = model.predict(X_test)
    return history, yhat


def duo_bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                      batch_size=68, units=128, learning_rate=0.005,
                      epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if logs.get('acc') > 0.90:
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0,
                      input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout,
                                 recurrent_dropout=recurrent_dropout,
                                 return_sequences=True)))
    model.add(Bidirectional(LSTM(units, dropout=dropout,
                                 recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss=loss, optimizer=adamopt, metrics=['accuracy'])

    history = model.fit(X_train, y_train, batch_size=batch_size,
                        epochs=epochs, validation_data=(X_test, y_test),
                        verbose=1, callbacks=[callbacks])
    score, acc = model.evaluate(X_test, y_test, batch_size=batch_size)
    yhat = model.predict(X_test)
    return history, yhat
Basically, I define two models, and the error occurs whenever the second one runs.
By the way, I do call tf.keras.backend.clear_session() between the two models.
ValueError: Tensor("Adam/bidirectional/forward_lstm/kernel/m:0", shape=(), dtype=resource) must be from the same graph as Tensor("bidirectional/forward_lstm/kernel:0", shape=(), dtype=resource).
The only change I made to the code was an attempt to pull the callback class out of the two functions and place it before them, to reduce redundancy.
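(Note that both functions above reference an adamopt that is never defined inside them, so it presumably lives at module scope. A plausible reconstruction of the shared, module-level pieces, an assumption rather than the asker's exact code, would be:)

import tensorflow as tf

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('acc') > 0.90:
            print("\nReached 90% accuracy so cancelling training!")
            self.model.stop_training = True

# A single optimizer instance created once and shared by both model
# functions; sharing one instance across two models is what raises
# the ValueError quoted above.
adamopt = tf.keras.optimizers.Adam(0.005)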
Answer:
The problem is not the callback. The error occurs because you pass the same optimizer instance to two different models, which is not possible since they belong to two different computation graphs.
Try defining the optimizer inside the function that builds the model, just before the model.compile() call.
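For example, here is a minimal sketch of that fix applied to the first function (the optimizer is assumed to be Adam, as the error message indicates; the module-level myCallback from the reconstruction above is reused):

import tensorflow as tf
from tensorflow.keras.layers import Masking, Bidirectional, LSTM, Dense

def bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                  batch_size=68, units=128, learning_rate=0.005,
                  epochs=20, dropout=0.2, recurrent_dropout=0.2):
    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0,
                      input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout,
                                 recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    # Create a fresh optimizer per model, right before compile(); passing
    # the learning rate positionally works across TF versions, which name
    # the argument either lr or learning_rate.
    adamopt = tf.keras.optimizers.Adam(learning_rate)
    model.compile(loss=loss, optimizer=adamopt, metrics=['accuracy'])

    history = model.fit(X_train, y_train, batch_size=batch_size,
                        epochs=epochs, validation_data=(X_test, y_test),
                        verbose=1, callbacks=[myCallback()])
    yhat = model.predict(X_test)
    return history, yhat

With a fresh optimizer created on each call, the optimizer's variables live in the same graph as that model's own weights, so the ValueError no longer occurs.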