While building my experimental model with the tf.keras.layers functional API, I ran into the following Graph disconnected error:
```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     35 outputs = x
     36
---> 37 model = tf.keras.Model(inputs=inputs, outputs=outputs)
     38 model.summary()

4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py in _map_graph_network(inputs, outputs)
    988                 'The following previous layers '
    989                 'were accessed without issue: ' +
--> 990                 str(layers_with_complete_input))
    991         for x in nest.flatten(node.outputs):
    992             computable_tensors.add(id(x))

ValueError: Graph disconnected: cannot obtain value for tensor
Tensor("input_63:0", shape=(?, 32, 32, 32), dtype=float32) at layer
"tf_op_layer_Pow_105". The following previous layers were accessed
without issue: []
```
To understand this error I went through the likely Stack Overflow posts, but none of them resolved it. My guess was a shape mismatch between some of the layers, so I double-checked the output shape of every layer, yet the error persists. I am not sure what is causing it. Can anyone suggest a way to fix this error? Is there a quick fix?
Update: my full code attempt:
```python
import tensorflow as tf
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     Dense, Dropout, Flatten, MaxPooling2D,
                                     concatenate)

def my_func(x):
    n = 2
    c = tf.constant([1, -1/6], dtype=tf.float32)
    p = tf.constant([1, 3], dtype=tf.float32)
    W, H, C = x.shape[1:].as_list()
    inputs = tf.keras.Input(shape=(W, H, C))
    xx = inputs
    res = []
    for i in range(n):
        m = c[i] * tf.math.pow(xx, p[i])
        res.append(m)
    csum = tf.math.cumsum(res)
    csum_tr = tf.transpose(csum, perm=[1, 2, 3, 4, 0])
    new_x = tf.reshape(csum_tr, tf.constant([-1, W, H, C*n]))
    return new_x

inputs = tf.keras.Input(shape=(32, 32, 3))
conv_1 = Conv2D(64, kernel_size=(3, 3), padding='same')(inputs)
BN_1 = BatchNormalization(axis=-1)(conv_1)
pool_1 = MaxPooling2D(strides=(1, 1), pool_size=(3, 3), padding='same')(BN_1)
z0 = my_func(pool_1)
conv_2 = Conv2D(64, kernel_size=(3, 3), padding='same')(z0)
BN_2 = BatchNormalization(axis=-1)(conv_2)
pool_2 = MaxPooling2D(strides=(1, 1), pool_size=(3, 3), padding='same')(BN_2)
z1 = my_func(pool_2)
merged_2 = concatenate([z0, z1], axis=-1)
act_2 = Activation('tanh')(merged_2)
x = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(act_2)
x = BatchNormalization(axis=-1)(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(3, 3))(x)
x = Dropout(0.1)(x)
x = Flatten()(x)
x = Dense(128)(x)
x = BatchNormalization()(x)
x = Activation('tanh')(x)
x = Dropout(0.1)(x)
x = Dense(10)(x)
x = Activation('softmax')(x)
outputs = x

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()
```
Can anyone point out what is causing this problem and how I should fix the graph disconnected error above? Any quick ideas? Thanks!
Answer:
Here is the correct way to write the custom function... there is no need for an extra Input layer inside it. The inner tf.keras.Input creates a brand-new placeholder (the input_63 in your traceback) that is never fed by the model, so every op built on top of it, such as tf_op_layer_Pow_105, is unreachable from the model's real input, which is exactly what "Graph disconnected" means.
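For intuition, here is a minimal sketch of the failure mode (assuming TensorFlow 2.x; bad_square is an illustrative stand-in, not part of the original code):

```python
import tensorflow as tf

def bad_square(x):
    # BUG: this builds a second, unfed placeholder; tf.math.pow then
    # hangs off `inner` instead of `x`, splitting the graph in two.
    inner = tf.keras.Input(shape=(8,))
    return tf.math.pow(inner, 2)

inp = tf.keras.Input(shape=(8,))
out = bad_square(inp)
# Raises ValueError: Graph disconnected: cannot obtain value for tensor ...
model = tf.keras.Model(inputs=inp, outputs=out)
```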
```python
def my_func(x):
    n = 2
    c = tf.constant([1, -1/6], dtype=tf.float32)
    p = tf.constant([1, 3], dtype=tf.float32)
    W, H, C = x.shape[1:].as_list()
    res = []
    for i in range(n):
        m = c[i] * tf.math.pow(x, p[i])
        res.append(m)
    csum = tf.math.cumsum(res)
    csum_tr = tf.transpose(csum, perm=[1, 2, 3, 4, 0])
    new_x = tf.reshape(csum_tr, tf.constant([-1, W, H, C*n]))
    return new_x
```
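As a quick sanity check (my own snippet, assuming TensorFlow 2.x eager execution), the function maps a (batch, W, H, C) tensor to (batch, W, H, C*n), with the n cumulative polynomial terms laid out along the channel axis:

```python
x = tf.random.normal([2, 4, 4, 3])
y = my_func(x)
print(y.shape)  # (2, 4, 4, 6): C*n = 3*2 output channels
```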
You can then apply it in the network simply with a Lambda layer:
```python
from tensorflow.keras.layers import Lambda

inputs = tf.keras.Input(shape=(32, 32, 3))
conv_1 = Conv2D(64, kernel_size=(3, 3), padding='same')(inputs)
BN_1 = BatchNormalization(axis=-1)(conv_1)
pool_1 = MaxPooling2D(strides=(1, 1), pool_size=(3, 3), padding='same')(BN_1)
z0 = Lambda(my_func)(pool_1)  ## <=================
conv_2 = Conv2D(64, kernel_size=(3, 3), padding='same')(z0)
BN_2 = BatchNormalization(axis=-1)(conv_2)
pool_2 = MaxPooling2D(strides=(1, 1), pool_size=(3, 3), padding='same')(BN_2)
z1 = Lambda(my_func)(pool_2)  ## <=================
merged_2 = concatenate([z0, z1], axis=-1)
act_2 = Activation('tanh')(merged_2)
x = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(act_2)
x = BatchNormalization(axis=-1)(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(3, 3))(x)
x = Dropout(0.1)(x)
x = Flatten()(x)
x = Dense(128)(x)
x = BatchNormalization()(x)
x = Activation('tanh')(x)
x = Dropout(0.1)(x)
x = Dense(10)(x)
x = Activation('softmax')(x)
outputs = x

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()
```
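To confirm the graph now connects end to end, a quick smoke test (my addition, not part of the original answer) is to push a random batch through the built model:

```python
dummy = tf.random.normal([4, 32, 32, 3])
preds = model(dummy, training=False)
print(preds.shape)  # (4, 10): one softmax vector per sample
```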