Tensorflow: ValueError: Cannot feed value of shape (423,) for Tensor 'Placeholder:0', which has shape '(?, 423)'

I'm new to machine learning and am learning TensorFlow by following this tutorial –

In the code below I can compute the loss per epoch, but I can't compute the accuracy.

import tensorflow as tf
from wordsnlp import create_feature_sets_and_labels
import numpy as np

train_x,train_y,test_x,test_y = create_feature_sets_and_labels('pos.txt','neg.txt')

n_nodes_hl1 = 500
n_classes = 2
batch_size = 100

x = tf.placeholder('float',[None,len(train_x[0])])
y = tf.placeholder('float')

#(input_data*weights) + biases
def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([len(train_x[0]),n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    return output

def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([len(train_x[0]),n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1,n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2,n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3,n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']) , hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']) , hidden_2_layer['biases'])
    l1 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_2_layer['weights']) , hidden_2_layer['biases'])
    l1 = tf.nn.relu(l3)

    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 10

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                _,c = sess.run([optimizer,cost] , feed_dict = {x: batch_x , y: batch_y})
                epoch_loss += c
                i += batch_size
            print("Epoch",epoch , 'completed out of ' ,hm_epochs, ' loss: ', epoch_loss )

        correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy: ', accuracy.eval({x:test_x , y: test_y}))

train_neural_network(x)

When I compute the accuracy, the code (which I've simplified) throws the following error:

ValueError: Cannot feed value of shape (423,) for Tensor 'Placeholder:0', which has shape '(?, 423)'

Can you point out where the problem is? Thanks in advance.


Answer:

First of all, your code is incomplete; check the neural_network_model function.

In any case, the code below works. For now I've used only a single layer; you can add more layers inside the neural_network_model function. Make sure n_classes matches the size of the output returned by neural_network_model (with a single layer that means n_nodes_hl1 must equal n_classes, which is why both are 2 below).

Get the code below running first, and only then update the neural_network_model function.

import tensorflow as tf
import numpy as np
import random
import pickle
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from collections import Counter

lemmatizer = WordNetLemmatizer()
hm_lines = 100000

def create_lexicon(pos,neg):
    lexicon = []
    with open(pos,'r') as f:
        contents = f.readlines()
        for l in contents[:hm_lines]:
            all_words = word_tokenize(l)
            lexicon += list(all_words)
    with open(neg,'r') as f:
        contents = f.readlines()
        for l in contents[:hm_lines]:
            all_words = word_tokenize(l)
            lexicon += list(all_words)
    lexicon = [lemmatizer.lemmatize(i) for i in lexicon]
    w_counts = Counter(lexicon)
    l2 = []
    for w in w_counts:
        #print(w_counts[w])
        if 1000 > w_counts[w] > 50:
            l2.append(w)
    print(len(l2))
    return l2

def sample_handling(sample,lexicon,classification):
    featureset = []
    with open(sample,'r') as f:
        contents = f.readlines()
        for l in contents[:hm_lines]:
            current_words = word_tokenize(l.lower())
            current_words = [lemmatizer.lemmatize(i) for i in current_words]
            features = np.zeros(len(lexicon))
            for word in current_words:
                if word.lower() in lexicon:
                    index_value = lexicon.index(word.lower())
                    features[index_value] += 1
            features = list(features)
            featureset.append([features,classification])
    return featureset

def create_feature_sets_and_labels(pos,neg,test_size = 0.1):
    lexicon = create_lexicon(pos,neg)
    features = []
    features += sample_handling('pos.txt',lexicon,[1,0])
    features += sample_handling('neg.txt',lexicon,[0,1])
    random.shuffle(features)
    features = np.array(features)

    testing_size = int(test_size*len(features))

    train_x = list(features[:,0][:-testing_size])
    train_y = list(features[:,1][:-testing_size])
    test_x = list(features[:,0][-testing_size:])
    test_y = list(features[:,1][-testing_size:])

    return train_x,train_y,test_x,test_y

train_x,train_y,test_x,test_y = create_feature_sets_and_labels('pos.txt','neg.txt')

n_nodes_hl1 = 2
n_classes = 2
batch_size = 100

x = tf.placeholder('float',[None,len(train_x[0])])
y = tf.placeholder('float')

#(input_data*weights) + biases
def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([len(train_x[0]),n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    output = tf.matmul(data,hidden_1_layer['weights']) + hidden_1_layer['biases']
    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 1

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                _,c = sess.run([optimizer,cost] , feed_dict = {x: batch_x , y: batch_y})
                epoch_loss += c
                i += batch_size
            print("Epoch",epoch , 'completed out of ' ,hm_epochs, ' loss: ', epoch_loss )

        correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy: ', accuracy.eval({x:test_x , y: test_y}))

train_neural_network(x)
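For reference, the ValueError in your title is what TensorFlow raises whenever a 1-D array is fed to a placeholder declared with a leading batch dimension: x has shape (?, 423), so every value fed to it must be 2-D. A minimal sketch of the mismatch and the usual reshape fix (the sample array here is made up for illustration):

import numpy as np

# x was declared as tf.placeholder('float', [None, 423]), i.e. shape (?, 423):
# a batch of feature vectors, each of length 423.
sample = np.zeros(423)            # shape (423,) -- feeding this raises the ValueError
batch = sample.reshape(1, -1)     # shape (1, 423) -- a batch of one, matches (?, 423)
# sess.run(prediction, feed_dict={x: batch})   # works once the shapes agree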

Note: the code is flawed in other respects as well, but that isn't the focus of this question; I took the missing function from the source you pointed to.
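Once the single-layer version runs, you can reinstate the three hidden layers from your original post, but the wiring needs fixing too: each layer must consume the previous layer's activated output together with its own weights and biases. A sketch of a corrected neural_network_model, assuming the hidden-layer sizes below (your post sets n_nodes_hl1 = 500 but never defines n_nodes_hl2 or n_nodes_hl3):

n_nodes_hl1 = 500
n_nodes_hl2 = 500   # assumed; not defined in the original post
n_nodes_hl3 = 500   # assumed; not defined in the original post

def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([len(train_x[0]), n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    # Each hidden layer: (previous output * weights) + biases, then ReLU.
    l1 = tf.nn.relu(tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases']))
    l2 = tf.nn.relu(tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases']))
    l3 = tf.nn.relu(tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases']))

    # Output layer produces one logit per class: shape [batch_size, n_classes].
    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
    return output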

Edit 2:

I don't think I should keep encouraging these careless mistakes, so this is the last time I'm fixing them. You've broken the very same function yet again. Before posting to Stack Overflow you must check your code thoroughly, so that you can be sure the question you're asking is a real one and not the result of some trivial slip.
