Hi, I'm trying to do logistic regression with TensorFlow (please forgive me if my code looks silly). I wrote the cost function once in NumPy and once in TensorFlow, and for the same initial weights I get different results. Can anyone help?
```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_blobs

DataSize = 1000
data, y = make_blobs(n_samples=1000, centers=2, n_features=2,
                     random_state=1, center_box=(-5.0, 5.0))
plt.scatter(data[:, 0], data[:, 1])
plt.show(block=False)

x = np.linspace(-1, 5, 1000)
b = np.ones([1, 1])
W = np.ones([2, 1])
asd = W * x.T + b
pred = np.dot(data, W) + b
plt.plot(x, asd[0])
plt.show(block=False)

result = 1 / (1 + np.exp(-pred))
s = np.log(result)
J = -(y.T.dot(s) + (1 - y).T.dot(1 - s)) / 1000
print("cost in numpy", J)

# with tf.variable_scope("scopi", reuse=True):

X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
b = tf.Variable(tf.ones((1, 1)), name="bias")
W = tf.Variable(tf.ones((1, 2)), name="weights")

ypred = W * X + b
hx = tf.reduce_sum(tf.sigmoid(ypred), reduction_indices=1)
# cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
J = -tf.reduce_sum(tf.mul(tf.transpose(Y), hx) + tf.mul(tf.transpose(1 - Y), (1 - hx))) / 1000
opti = tf.train.AdamOptimizer(0.1).minimize(J)

with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    h = session.run(J, feed_dict={X: data, Y: y})
    print("cost in tensorflow", h)
    # epoch = 100
    # for i in range(epoch):
    #     for j in range(DataSize):
    #         session.run(opti, feed_dict={X: data[j], Y: y[j]})
    #     if i % 10 == 0:
    #         a = session.run(J, feed_dict={X: data, Y: y})
    #         print("cost ", a)
```
Sample output from the two cost functions:
('cost in numpy', array([ 2.37780175])) ('cost in tensorflow', 0.073667422)
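For reference, the textbook binary cross-entropy that both snippets appear to be aiming for can be written in plain NumPy like this (a minimal sketch with made-up toy data, not the asker's blob dataset):

```python
import numpy as np

def sigmoid(z):
    # elementwise logistic function
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(X, y, W, b):
    # h = sigmoid(X.W + b), then the mean of -[y*log(h) + (1-y)*log(1-h)]
    h = sigmoid(X.dot(W) + b)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

# toy data: 4 samples, 2 features (hypothetical, just for illustration)
X = np.array([[0.5, 1.0], [-1.0, 2.0], [3.0, -0.5], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
W = np.ones(2)
b = 1.0

J = cross_entropy(X, y, W, b)
print(J)
```

Any implementation of the cost, NumPy or TensorFlow, should agree with this formula on identical inputs.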
Answer:
Your variables are initialized (by running their initializers, `tf.ones` here) on this line:

```python
session.run(tf.initialize_all_variables())
```
After that line, you can set the values explicitly like this (note `b` was declared with shape `(1, 1)`, so the assigned tensor must match):

```python
session.run(tf.assign(b, tf.ones((1, 1))))
```
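As a side note, even with identical weights the two snippets do not compute the same formula: the NumPy version takes `np.log` of the sigmoid, while the TensorFlow version sums raw sigmoids with no `tf.log` at all (and multiplies `W * X` elementwise rather than via a matrix product). A minimal NumPy sketch (toy data, not the blob dataset) shows the two expressions diverge on the same inputs:

```python
import numpy as np

X = np.array([[0.5, 1.0], [-1.0, 2.0], [3.0, -0.5], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
W = np.ones(2)
b = 1.0
n = len(y)

h = 1.0 / (1.0 + np.exp(-(X.dot(W) + b)))  # sigmoid(X.W + b)

# the question's NumPy-style cost: log(sigmoid), with (1 - log(h)) for the negative class
s = np.log(h)
J_numpy_style = -(y.dot(s) + (1 - y).dot(1 - s)) / n

# the question's TensorFlow-style cost: raw sigmoids, no log anywhere
J_tf_style = -(y.dot(h) + (1 - y).dot(1 - h)) / n

print(J_numpy_style, J_tf_style)
```

Because the formulas differ, matching the initial weights alone cannot make the two printed costs agree.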