Out of curiosity, I tried to build a simple fully connected neural network with TensorFlow to learn a square-wave function like the one described below.
So the input is a 1-D array of x values (as the horizontal axis), and the output is a binary scalar value. I used tf.nn.sparse_softmax_cross_entropy_with_logits as the loss function and tf.nn.relu as the activation. The network has 3 hidden layers (100*100*100) plus a single input node and output node. The input data is generated to match the waveform above, so data size is not an issue.
However, the trained model seems to fail completely, always predicting the negative class.
So I am trying to figure out why. Is the network configuration suboptimal, or is there some deeper mathematical flaw in neural networks (even though I thought a neural network should be able to mimic any function)?
Thanks.
Following the suggestion in the comments, here is the full code. One thing I said earlier was wrong: there are actually 2 output nodes (since there are 2 output classes):
""" See if neural net can find piecewise linear correlation in the data"""import timeimport osimport tensorflow as tfimport numpy as npdef generate_placeholder(batch_size): x_placeholder = tf.placeholder(tf.float32, shape=(batch_size, 1)) y_placeholder = tf.placeholder(tf.float32, shape=(batch_size)) return x_placeholder, y_placeholderdef feed_placeholder(x, y, x_placeholder, y_placeholder, batch_size, loop): x_selected = [[None]] * batch_size y_selected = [None] * batch_size for i in range(batch_size): x_selected[i][0] = x[min(loop*batch_size, loop*batch_size % len(x)) + i, 0] y_selected[i] = y[min(loop*batch_size, loop*batch_size % len(y)) + i] feed_dict = {x_placeholder: x_selected, y_placeholder: y_selected} return feed_dictdef inference(input_x, H1_units, H2_units, H3_units): with tf.name_scope('H1'): weights = tf.Variable(tf.truncated_normal([1, H1_units], stddev=1.0/2), name='weights') biases = tf.Variable(tf.zeros([H1_units]), name='biases') a1 = tf.nn.relu(tf.matmul(input_x, weights) + biases) with tf.name_scope('H2'): weights = tf.Variable(tf.truncated_normal([H1_units, H2_units], stddev=1.0/H1_units), name='weights') biases = tf.Variable(tf.zeros([H2_units]), name='biases') a2 = tf.nn.relu(tf.matmul(a1, weights) + biases) with tf.name_scope('H3'): weights = tf.Variable(tf.truncated_normal([H2_units, H3_units], stddev=1.0/H2_units), name='weights') biases = tf.Variable(tf.zeros([H3_units]), name='biases') a3 = tf.nn.relu(tf.matmul(a2, weights) + biases) with tf.name_scope('softmax_linear'): weights = tf.Variable(tf.truncated_normal([H3_units, 2], stddev=1.0/np.sqrt(H3_units)), name='weights') biases = tf.Variable(tf.zeros([2]), name='biases') logits = tf.matmul(a3, weights) + biases return logitsdef loss(logits, labels): labels = tf.to_int32(labels) cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy') return tf.reduce_mean(cross_entropy, name='xentropy_mean')def inspect_y(labels): return tf.reduce_sum(tf.cast(labels, tf.int32))def training(loss, learning_rate): tf.summary.scalar('lost', loss) optimizer = tf.train.GradientDescentOptimizer(learning_rate) global_step = tf.Variable(0, name='global_step', trainable=False) train_op = optimizer.minimize(loss, global_step=global_step) return train_opdef evaluation(logits, labels): labels = tf.to_int32(labels) correct = tf.nn.in_top_k(logits, labels, 1) return tf.reduce_sum(tf.cast(correct, tf.int32))def run_training(x, y, batch_size): with tf.Graph().as_default(): x_placeholder, y_placeholder = generate_placeholder(batch_size) logits = inference(x_placeholder, 100, 100, 100) Loss = loss(logits, y_placeholder) y_sum = inspect_y(y_placeholder) train_op = training(Loss, 0.01) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) max_steps = 10000 for step in range(max_steps): start_time = time.time() feed_dict = feed_placeholder(x, y, x_placeholder, y_placeholder, batch_size, step) _, loss_val = sess.run([train_op, Loss], feed_dict = feed_dict) duration = time.time() - start_time if step % 100 == 0: print('Step {}: loss = {:.2f} {:.3f}sec'.format(step, loss_val, duration)) x_test = np.array(range(1000)) * 0.001 x_test = np.reshape(x_test, (1000, 1)) _ = sess.run(logits, feed_dict={x_placeholder: x_test}) print(min(_[:, 0]), max(_[:, 0]), min(_[:, 1]), max(_[:, 1])) print(_)if __name__ == '__main__': population = 10000 input_x = np.random.rand(population) input_y = np.copy(input_x) for bin in range(10): print(bin, bin/10, 0.5 - 0.5*(-1)**bin) input_y[input_x >= 
bin/10] = 0.5 - 0.5*(-1)**bin batch_size = 1000 input_x = np.reshape(input_x, (population, 1)) run_training(input_x, input_y, batch_size)
The sample output shows that the model always prefers the first class over the second, as shown by min(_[:, 0]) > max(_[:, 1]): over the sample of size population, the minimum logit of the first class is higher than the maximum logit of the second class.
My mistake. The problem was in these lines of code:
for i in range(batch_size):
    x_selected[i][0] = x[min(loop*batch_size, loop*batch_size % len(x)) + i, 0]
    y_selected[i] = y[min(loop*batch_size, loop*batch_size % len(y)) + i]
Python turns the entire x_selected list into the same value, because [[None]] * batch_size creates batch_size references to one and the same inner list (see the short demonstration after the fix below). This code problem has now been resolved. The fix is:
x_selected = np.zeros((batch_size, 1))
y_selected = np.zeros((batch_size,))
for i in range(batch_size):
    x_selected[i, 0] = x[(loop*batch_size + i) % x.shape[0], 0]
    y_selected[i] = y[(loop*batch_size + i) % y.shape[0]]
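For reference, here is a minimal standalone demonstration of the aliasing pitfall (not part of the original post, just an illustration of the Python behavior involved):

x_selected = [[None]] * 3   # all three entries are the SAME inner list
x_selected[0][0] = 42       # so this write shows up in every entry
print(x_selected)           # -> [[42], [42], [42]]

Because every batch entry ended up holding the same x value, the network never saw the actual waveform, which explains the constant prediction.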
After the fix, the model shows more variety. Currently it outputs class 0 for x <= 0.5 and class 1 for x > 0.5, but that is still far from ideal.
After changing the network configuration to 100 nodes * 4 layers and training for 1 million steps (batch size = 100, sample size = 10 million), the model performs very well, making errors only at the edges where the y value flips. So this question is closed.
Answer:
You are essentially trying to learn a periodic function, and the function is highly non-linear and non-smooth, so it is not as simple as it looks. In short, a better representation of the input features would help.
Suppose your period is T = 2, i.e. f(x) = f(x+2). When the inputs/outputs are integers, your problem reduces to: f(x) = 1 if x is odd else -1.
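For concreteness, a two-line sanity check of this reduction (the helper f is mine, not from the post):

def f(x):
    return 1 if x % 2 == 1 else -1

assert all(f(x) == f(x + 2) for x in range(100))   # period T = 2 holds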
In that case, your problem reduces to this discussion, in which we train a neural network to distinguish between odd and even numbers.
I think the second bullet point in that post should help, even for the general case where the inputs are floats.
Try representing the numbers in binary with a fixed-length precision.
In our simplified problem above, it is easy to see that the output is determined if and only if the least significant bit is known.
decimal  binary  -> output
1:       0 0 1   -> 1
2:       0 1 0   -> -1
3:       0 1 1   -> 1
...
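As an illustration of that suggestion, here is a minimal sketch of such an encoding (the helper name to_binary_features and the 8-bit width are my assumptions, not from the post):

import numpy as np

def to_binary_features(x, n_bits=8):
    """Encode non-negative integers as fixed-length binary feature vectors,
    most significant bit first."""
    x = np.asarray(x, dtype=np.int64)
    # Extract each bit position as its own feature column.
    return np.array([(x >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]).T

print(to_binary_features([1, 2, 3]))
# [[0 0 0 0 0 0 0 1]
#  [0 0 0 0 0 0 1 0]
#  [0 0 0 0 0 0 1 1]]

With this representation, the parity of x lives entirely in the last column, so the network no longer has to reconstruct the periodic structure from a raw scalar input.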