I'm trying to rebuild a DNNRegressor model from tf.contrib.learn in plain TensorFlow, but my loss is six orders of magnitude higher. Can anyone point me in the right direction? I have no idea what's wrong or what's different :/ If it helps, the data is here: http://pastebin.com/BG6r6EF6
tf.contrib.learn code:
    import numpy as np
    import tensorflow as tf

    data = np.loadtxt('training.csv', delimiter=',', skiprows=1,
                      usecols=(3,4,5,6,7,8,9,10,11,12,13,14,15,16,17),
                      dtype=np.float32)
    X_ = data[:, :-1]
    Y_ = data[:, -1]

    feature_columns = [tf.contrib.layers.real_valued_column("", dimension=14)]
    classifier = tf.contrib.learn.DNNRegressor(
        feature_columns=feature_columns,
        hidden_units=[7],
        optimizer=tf.train.RMSPropOptimizer(learning_rate=.001),
        activation_fn=tf.nn.relu)
    classifier.fit(x=X_, y=Y_, max_steps=1000)
Plain TensorFlow code:
    import numpy as np
    import tensorflow as tf

    data = np.loadtxt('training.csv', delimiter=',', skiprows=1,
                      usecols=(3,4,5,6,7,8,9,10,11,12,13,14,15,16,17),
                      dtype=np.float32)
    X_ = data[:, :-1]
    Y_ = data[:, -1]

    n_features = 14
    hidden_units = 7
    n_classes = 1
    lr = .001

    X = tf.placeholder(tf.float32, [None, n_features])
    Y = tf.placeholder(tf.float32, [None])

    W = tf.Variable(tf.truncated_normal([n_features, hidden_units]))
    W2 = tf.Variable(tf.truncated_normal([hidden_units, n_classes]))
    b = tf.Variable(tf.zeros([hidden_units]))
    b2 = tf.Variable(tf.zeros([n_classes]))

    hidden1 = tf.nn.relu(tf.matmul(X, W) + b)
    pred = tf.matmul(hidden1, W2) + b2

    # I tried several variants of the squared-error loss, with no luck
    loss = tf.nn.l2_loss(pred - Y)
    #loss = tf.reduce_sum(tf.pow(pred - Y, 2))/(2*n_instances)
    #loss = tf.reduce_mean(tf.squared_difference(pred, Y))

    optimizer = tf.train.RMSPropOptimizer(lr).minimize(loss)

    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        for step in range(1000):
            _, loss_value = sess.run([optimizer, loss],
                                     feed_dict={X: X_, Y: Y_})
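One thing worth noting about the shapes in this graph (an observation on my part, not something confirmed in the question): `pred = tf.matmul(hidden1, W2) + b2` has shape `[batch, 1]`, while the `Y` placeholder has shape `[batch]`, so `pred - Y` silently broadcasts to a `[batch, batch]` matrix rather than an element-wise difference. A minimal NumPy sketch of how that broadcasting behaves:

```python
import numpy as np

# pred from the network has shape [batch, 1]; Y is fed with shape [batch]
pred = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)  # shape (3, 1)
y = np.array([1.0, 2.0, 3.0], dtype=np.float32)           # shape (3,)

diff = pred - y  # broadcasts (3, 1) against (3,) -> shape (3, 3), not (3,)

# squeezing pred to rank 1 first gives the intended element-wise difference
diff_fixed = np.squeeze(pred, axis=1) - y  # shape (3,), all zeros here
```

The same squeeze/reshape applied to `pred` in the TensorFlow graph (or feeding `Y` with shape `[None, 1]`) would make the two operands match before the loss is computed.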
Update
I changed the loss to
loss = tf.reduce_mean(tf.squared_difference(pred, Y))
and now both approaches report roughly the same loss value (around 200). However, the TensorFlow model's accuracy is very poor, while the DNNRegressor produces the output I expect on validation data. The TensorBoard graphs also look very different.
Answer:
I would use TensorBoard to compare the graphs of the two models. Have you tried that?
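Beyond comparing the graphs, one difference worth checking (an assumption on my part, not verified against the posted data): since `pred` is `[batch, 1]` and `Y` is `[batch]`, `pred - Y` broadcasts to `[batch, batch]`, and `tf.nn.l2_loss` then sums half of all `batch²` squared entries instead of averaging over `batch` examples. For a few thousand rows, that alone is enough to explain a gap of roughly six orders of magnitude. A NumPy sketch of the scale blow-up, using random stand-ins for the real predictions and labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                     # hypothetical batch size
pred = rng.normal(size=(n, 1)).astype(np.float32)  # network output, shape (n, 1)
y = rng.normal(size=n).astype(np.float32)          # labels, shape (n,)

diff = pred - y                    # broadcasts to shape (n, n)
l2 = 0.5 * np.sum(diff ** 2)       # what tf.nn.l2_loss(pred - Y) computes
mse = np.mean((pred[:, 0] - y) ** 2)  # the intended per-example mean squared error

ratio = l2 / mse  # on the order of n^2 / 2 -- millions of times larger
```

So even after switching to `tf.reduce_mean(tf.squared_difference(pred, Y))`, the gradients still flow through the broadcast `[batch, batch]` difference, which would explain poor accuracy despite a plausible-looking loss value.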