import tensorflow as tf
import keras
from keras.models import Model
from keras.layers import Input, Bidirectional, LSTM, Dropout, Dense

Aux_input = Input(shape=(wrd_temp.shape[1], 1), dtype='float32')  # shape (, 200)
Main_input = Input(shape=(wrdvec.shape[1],), dtype='float32')  # shape (, 367)
X = Bidirectional(LSTM(20, return_sequences=True))(Aux_input)
X = Dropout(0.2)(X)
X = Bidirectional(LSTM(28, return_sequences=True))(X)
X = Dropout(0.2)(X)
X = Bidirectional(LSTM(28, return_sequences=False))(X)
Aux_Output = Dense(Opt_train.shape[1], activation='softmax')(X)  # 22 classes in total
x = keras.layers.concatenate([Main_input, Aux_Output], axis=1)
x = tf.reshape(x, [1, 389, 1])  # 389 is the size of the new input, i.e. Main_input + Aux_Output
x = Bidirectional(LSTM(20, return_sequences=True))(x)
x = Dropout(0.2)(x)
x = Bidirectional(LSTM(28, return_sequences=True))(x)
x = Dropout(0.2)(x)
x = Bidirectional(LSTM(28, return_sequences=False))(x)
Main_Output = Dense(Opt_train.shape[1], activation='softmax')(x)
model = Model(inputs=[Aux_input, Main_input], outputs=[Aux_Output, Main_Output])
The error occurs on the line that declares the model, i.e. model = Model(), where an AttributeError is raised. If there are other mistakes in my implementation, please note them in the comments section and let me know.
Answer:

The problem is that every tf operation should be wrapped in one of the following:

- a keras.backend function,
- a Lambda layer,
- a dedicated keras layer or function with the same behavior.

When you use a tf operation, you get a tf tensor object, and that object has no _keras_history field. When you use keras functions, you get a Keras tensor.
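Applied to the code in the question, the tf.reshape call is what breaks the graph; it can be replaced by the built-in Reshape layer, or the raw tf op can be wrapped in a Lambda layer. A minimal sketch (dimensions 367 and 22 are borrowed from the question; the layer sizes are trimmed for brevity, and tf.keras is used here so the example is self-contained):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

main_dim, aux_classes = 367, 22  # 367 + 22 = 389, as in the question

main_input = layers.Input(shape=(main_dim,), dtype='float32')
aux_probs = layers.Input(shape=(aux_classes,), dtype='float32')

# Concatenate along the feature axis -> shape (None, 389)
x = layers.concatenate([main_input, aux_probs], axis=1)

# Option 1: a dedicated keras layer with the same behavior.
# The batch dimension stays implicit, unlike tf.reshape(x, [1, 389, 1]).
x = layers.Reshape((main_dim + aux_classes, 1))(x)  # shape (None, 389, 1)

# Option 2 (equivalent): wrap the raw tf op in a Lambda layer.
# x = layers.Lambda(lambda t: tf.expand_dims(t, axis=-1))(x)

x = layers.Bidirectional(layers.LSTM(20))(x)
out = layers.Dense(aux_classes, activation='softmax')(x)

# Model() now succeeds because every tensor carries Keras layer history.
model = keras.Model(inputs=[main_input, aux_probs], outputs=out)
```

Either option produces a proper Keras tensor, so Model() can trace the graph back to the inputs without raising the AttributeError.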