The following simple neural network is defined in Python:
input_layer = tf.placeholder(tf.float32, shape=[1, 1, 8000, 1], name='input_layer')

# Convolutional layer
conv = tf.layers.conv2d(
    inputs=input_layer,
    filters=32,
    kernel_size=[1, 11],
    padding="same",
    strides=1,
    activation=tf.nn.relu)

# Output layer
logits = tf.layers.dense(inputs=conv, units=5, name='logit')
The generated graph topology is as follows:
0 input_layer Placeholder
1 conv2d/kernel/Initializer/random_uniform/shape Const
2 conv2d/kernel/Initializer/random_uniform/min Const
3 conv2d/kernel/Initializer/random_uniform/max Const
4 conv2d/kernel/Initializer/random_uniform/RandomUniform RandomUniform
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/shape
5 conv2d/kernel/Initializer/random_uniform/sub Sub
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/max
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform/min
6 conv2d/kernel/Initializer/random_uniform/mul Mul
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/RandomUniform
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform/sub
7 conv2d/kernel/Initializer/random_uniform Add
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/mul
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform/min
8 conv2d/kernel VariableV2
9 conv2d/kernel/Assign Assign
└─── Input0 ─ conv2d/kernel
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform
10 conv2d/kernel/read Identity
└─── Input0 ─ conv2d/kernel
11 conv2d/bias/Initializer/zeros Const
12 conv2d/bias VariableV2
13 conv2d/bias/Assign Assign
└─── Input0 ─ conv2d/bias
└─── Input1 ─ conv2d/bias/Initializer/zeros
14 conv2d/bias/read Identity
└─── Input0 ─ conv2d/bias
15 conv2d/convolution/Shape Const
16 conv2d/convolution/dilation_rate Const
17 conv2d/convolution Conv2D
└─── Input0 ─ input_layer
└─── Input1 ─ conv2d/kernel/read
18 conv2d/BiasAdd BiasAdd
└─── Input0 ─ conv2d/convolution
└─── Input1 ─ conv2d/bias/read
19 conv2d/Relu Relu
└─── Input0 ─ conv2d/BiasAdd
20 logit/kernel/Initializer/random_uniform/shape Const
21 logit/kernel/Initializer/random_uniform/min Const
22 logit/kernel/Initializer/random_uniform/max Const
23 logit/kernel/Initializer/random_uniform/RandomUniform RandomUniform
└─── Input0 ─ logit/kernel/Initializer/random_uniform/shape
24 logit/kernel/Initializer/random_uniform/sub Sub
└─── Input0 ─ logit/kernel/Initializer/random_uniform/max
└─── Input1 ─ logit/kernel/Initializer/random_uniform/min
25 logit/kernel/Initializer/random_uniform/mul Mul
└─── Input0 ─ logit/kernel/Initializer/random_uniform/RandomUniform
└─── Input1 ─ logit/kernel/Initializer/random_uniform/sub
26 logit/kernel/Initializer/random_uniform Add
└─── Input0 ─ logit/kernel/Initializer/random_uniform/mul
└─── Input1 ─ logit/kernel/Initializer/random_uniform/min
27 logit/kernel VariableV2
28 logit/kernel/Assign Assign
└─── Input0 ─ logit/kernel
└─── Input1 ─ logit/kernel/Initializer/random_uniform
29 logit/kernel/read Identity
└─── Input0 ─ logit/kernel
30 logit/bias/Initializer/zeros Const
31 logit/bias VariableV2
32 logit/bias/Assign Assign
└─── Input0 ─ logit/bias
└─── Input1 ─ logit/bias/Initializer/zeros
33 logit/bias/read Identity
└─── Input0 ─ logit/bias
34 logit/Tensordot/transpose/perm Const
35 logit/Tensordot/transpose Transpose
└─── Input0 ─ conv2d/Relu
└─── Input1 ─ logit/Tensordot/transpose/perm
36 logit/Tensordot/Reshape/shape Const
37 logit/Tensordot/Reshape Reshape
└─── Input0 ─ logit/Tensordot/transpose
└─── Input1 ─ logit/Tensordot/Reshape/shape
38 logit/Tensordot/transpose_1/perm Const
39 logit/Tensordot/transpose_1 Transpose
└─── Input0 ─ logit/kernel/read
└─── Input1 ─ logit/Tensordot/transpose_1/perm
40 logit/Tensordot/Reshape_1/shape Const
41 logit/Tensordot/Reshape_1 Reshape
└─── Input0 ─ logit/Tensordot/transpose_1
└─── Input1 ─ logit/Tensordot/Reshape_1/shape
42 logit/Tensordot/MatMul MatMul
└─── Input0 ─ logit/Tensordot/Reshape
└─── Input1 ─ logit/Tensordot/Reshape_1
43 logit/Tensordot/shape Const
44 logit/Tensordot Reshape
└─── Input0 ─ logit/Tensordot/MatMul
└─── Input1 ─ logit/Tensordot/shape
45 logit/BiasAdd BiasAdd
└─── Input0 ─ logit/Tensordot
└─── Input1 ─ logit/bias/read
What is the purpose of the reshape operations between nodes 36 and 44? I am using the Snapdragon Neural Processing Engine (SNPE), which does not allow reshape operations. Is there a way to express this model without reshape operations?
Answer:
All of these reshape operations are added by tf.tensordot, which tf.layers.dense uses for higher-dimensional input (4-D in your case). From its documentation:

Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial matrix multiply.
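To see why nodes 35-44 appear, here is a NumPy sketch (array names are illustrative) of what tf.tensordot builds into the graph: flatten the leading axes into one, run a single 2-D MatMul, then restore the original shape. The direct contraction over the channel axis gives the same result:

```python
import numpy as np

# A hypothetical activation tensor with the shape the conv layer in the
# question produces: [batch=1, height=1, width=8000, channels=32].
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 1, 8000, 32)).astype(np.float32)
w = rng.standard_normal((32, 5)).astype(np.float32)  # dense kernel

# What the graph does (nodes 37, 42, 44):
flat = x.reshape(-1, 32)             # Reshape   -> [8000, 32]
mm = flat @ w                        # MatMul    -> [8000, 5]
y_graph = mm.reshape(1, 1, 8000, 5)  # Reshape   -> [1, 1, 8000, 5]

# The mathematically equivalent contraction over the last axis.
y_direct = np.tensordot(x, w, axes=[[3], [0]])

assert y_graph.shape == (1, 1, 8000, 5)
assert np.allclose(y_graph, y_direct, atol=1e-4)
```

The reshapes therefore carry no information on their own; they only bracket the 2-D MatMul that implements the dense layer.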
If reshape operations are undesirable in your environment, you can try defining the weights and biases manually and applying the dot product via tf.matmul.
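A NumPy sketch of that idea (names are illustrative; in TensorFlow itself, whether matmul broadcasts a 2-D kernel against a higher-rank input depends on the version, so check your target before relying on it): a batched matmul applies the kernel at every position directly, with no flatten/unflatten pair in the graph.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((1, 1, 8000, 32)).astype(np.float32)
kernel = rng.standard_normal((32, 5)).astype(np.float32)  # manual weights
bias = rng.standard_normal((5,)).astype(np.float32)       # manual bias

# matmul broadcasts the 2-D kernel over the leading batch axes,
# so no intermediate Reshape nodes are required.
logits = np.matmul(x, kernel) + bias  # shape [1, 1, 8000, 5]

# Same result as the flatten -> MatMul -> unflatten sequence in the graph.
ref = (x.reshape(-1, 32) @ kernel).reshape(1, 1, 8000, 5) + bias
assert logits.shape == (1, 1, 8000, 5)
assert np.allclose(logits, ref, atol=1e-4)
```

An alternative worth considering (not part of the original answer): since SNPE supports Conv2D, a dense layer over the channel axis can also be expressed as a 1x1 convolution with kernel shape [1, 1, 32, 5], which computes the same per-position dot product with no reshapes.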