How to feed two arguments into session.run() in TensorFlow

I am trying to learn TensorFlow and am attempting some modifications to the beginners' example.

I am trying to combine the Implementing a Neural Network from Scratch tutorial with the Deep MNIST for Experts tutorial.

I get the data with X, y = sklearn.datasets.make_moons(50, noise=0.20). Basically, this line generates 2-D points X (shape (50, 2)) and two-class labels y (0/1).
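Note that make_moons returns the labels as a flat integer array, while the y_ placeholder below expects one-hot rows of shape [50, 2], so the labels have to be converted before feeding (the working code further down does this with an explicit loop). A minimal sketch of the shapes involved, using np.eye for the conversion:

import numpy as np
import sklearn.datasets

X, y = sklearn.datasets.make_moons(50, noise=0.20)
print X.shape         # (50, 2) -- 2-D point coordinates
print y.shape         # (50,)   -- integer class labels, 0 or 1

# One-hot encode: label 0 -> [1., 0.], label 1 -> [0., 1.]
# (the working code below uses the opposite mapping; either works
# as long as it is consistent)
y_onehot = np.eye(2)[y]
print y_onehot.shape  # (50, 2), matching the y_ placeholder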

x = tf.placeholder(tf.float32, shape=[50,2])
y_ = tf.placeholder(tf.float32, shape=[50,2])

The structure of the network is the same as in the Deep MNIST for Experts tutorial. The difference is in the session.run() call:

sess.run(train_step, feed_dict={x:X, y_:y})

but this raises

ValueError: setting an array element with a sequence.

Can anyone give me a hint about what is going on here? Here is the code:

import numpy as np
import matplotlib
import tensorflow as tf
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model

sess = tf.InteractiveSession()
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
np.random.seed(0)
X, y = sklearn.datasets.make_moons(50, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y)

batch_xs = np.vstack([np.expand_dims(k,0) for k in X])

x = tf.placeholder(tf.float32, shape=[50,2])
y_ = tf.placeholder(tf.float32, shape=[50,2])
W = tf.Variable(tf.zeros([2,2]))
b = tf.Variable(tf.zeros([2]))
a = np.arange(100).reshape((50, 2))
y = tf.nn.softmax(tf.matmul(x,W) + b)

cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
sess.run(tf.initialize_all_variables())
for i in range(20000):
    sess.run(train_step, feed_dict={x:X, y_:y})

After struggling with TensorFlow, here is the working code:

# Package imports
import numpy as np
import matplotlib
import tensorflow as tf
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model

rng = np.random

input_dim = 2
output_dim = 2
hidden_dim = 3

np.random.seed(0)
Train_X, Train_Y = sklearn.datasets.make_moons(200, noise=0.20)
Train_X = np.reshape(Train_X, (-1,2))
Train_YY = []
for i in Train_Y:       # convert Train_Y into a 2-D list
    if i == 1:
        Train_YY.append([1,0])
    else:
        Train_YY.append([0,1])
print Train_YY

X = tf.placeholder("float", shape=[None,input_dim])
Y = tf.placeholder("float")

W1 = tf.Variable(tf.random_normal([input_dim, hidden_dim], stddev=0.35),
                 name="weights")
b1 = tf.Variable(tf.zeros([1,hidden_dim]), name="bias1")
a1 = tf.tanh(tf.add(tf.matmul(X,W1),b1))

W2 = tf.Variable(tf.random_normal([hidden_dim,output_dim]), name="weight2")
b2 = tf.Variable(tf.zeros([1,output_dim]), name="bias2")
a2 = tf.add(tf.matmul(a1, W2), b2)
output = tf.nn.softmax(a2)

correct_prediction = tf.equal(tf.argmax(output,1), tf.argmax(Y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

cross_entropy = -tf.reduce_sum(Y*tf.log(output))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(20000):
        # for (a,d) in zip(Train_X, Train_Y):
        training_cost = sess.run(optimizer, feed_dict={X:Train_X, Y:Train_YY})
        if i%1000 == 0:
            # print "Training cost=", training_cost, "W1=", W1.eval(), "b1=", b1.eval(), "W2=", W2.eval(), "b2=", b2.eval()
            # print output.eval({X:Train_X, Y:Train_YY})
            # print cross_entropy.eval({X:Train_X, Y:Train_YY})
            print "Accuracy = ", accuracy.eval({X:Train_X, Y:Train_YY})
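One caveat with this version: minimize() returns an op that produces no value, so training_cost above is always None. If the actual loss is wanted, several fetches can be passed to a single session.run call, for example (a minimal sketch using the names defined above):

# Fetch the train op and the loss tensor in one session.run call,
# so training_cost holds the real loss value instead of None.
_, training_cost = sess.run([optimizer, cross_entropy],
                            feed_dict={X: Train_X, Y: Train_YY})
print "Training cost =", training_cost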

Answer:

The problem is this line, where you redefine y:

y = tf.nn.softmax(tf.matmul(x,W) + b)

TensorFlow therefore raises an error, because it is not possible to pass y_: y in feed_dict when y is another tensor (and even if it were possible, it would create a circular dependency!).

The solution is to rename the output and rewrite your softmax and cross-entropy ops:

y_softmax = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y_softmax))
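Putting the fix into the original script, a minimal sketch of the corrected training loop could look like the following (the one-hot encoding of the labels, done with an explicit loop in the working code above, is written here with np.eye for brevity):

# Minimal sketch of the question's model with the fix applied:
# the softmax output is named y_softmax, so the make_moons labels
# (one-hot encoded as y_onehot) can be fed into y_ without clashing.
import numpy as np
import tensorflow as tf
import sklearn.datasets

X_data, y_data = sklearn.datasets.make_moons(50, noise=0.20)
y_onehot = np.eye(2)[y_data]          # labels -> shape (50, 2)

x  = tf.placeholder(tf.float32, shape=[50, 2])
y_ = tf.placeholder(tf.float32, shape=[50, 2])
W  = tf.Variable(tf.zeros([2, 2]))
b  = tf.Variable(tf.zeros([2]))

y_softmax = tf.nn.softmax(tf.matmul(x, W) + b)   # renamed, no clash with labels
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_softmax))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
for i in range(20000):
    sess.run(train_step, feed_dict={x: X_data, y_: y_onehot})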

