Tensorflow: ValueError: Shapes (None, 1) and (None, 2) are incompatible

I am very new to neural networks and machine learning, but I have created training data for clouds vs. no clouds and am using the same model I previously used for a hand-gesture project. When I first used this model, I ran into a similar error message, which read:

ValueError: Shapes (64, 10) and (64, 4) are incompatible

I had used the same model before in some hand-gesture code, and I remember hitting a similar error back then: the training data had only 4 classes while my model was trying to fit 10, as far as I understood it. I had to change the number of neurons in the final layer from 10 to 4, roughly as sketched below.
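A minimal sketch of that earlier fix (the size of the final Dense layer has to match the number of classes in the labels; layers is keras.layers as in the code below):

# model.add(layers.Dense(10, activation='softmax'))  # 10 outputs, but the gesture data had only 4 classes
model.add(layers.Dense(4, activation='softmax'))      # 4 outputs, one per gesture class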

import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
import random
from tqdm import tqdm
import tensorflow as tf
from keras import layers
from keras import models

# Load the training data
DATADIR = "D:/Python_Code/Machine_learning/Cloud_Trainin_Data"
CATEGORIES = ["Clouds", "Normal_Terrain"]

training_data = []
for category in CATEGORIES:  # loop over clouds and no clouds
    path = os.path.join(DATADIR, category)  # build the path for clouds / no clouds
    class_num = CATEGORIES.index(category)  # get the class label (0 or 1): 0 = clouds, 1 = no clouds
    for img in tqdm(os.listdir(path)):  # loop over every image
        img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)  # convert to an array
        height = 1000
        dim = None
        (h, w) = img_array.shape[:2]
        r = height / float(h)
        dim = (int(w * r), height)
        resized = cv2.resize(img_array, dim, interpolation=cv2.INTER_AREA)
        training_data.append([resized, class_num])  # add it to the training data

print(len(training_data))

random.shuffle(training_data)
for sample in training_data[:10]:
    print(sample[1])

X = []
y = []
for features, label in training_data:
    X.append(features)
    y.append(label)

hh, ww = resized.shape
X = np.array(X).reshape(-1, hh, ww, 1)
y = np.array(y)

# Normalize the data
X = X / 255.0

# Build the model
model = models.Sequential()
model.add(layers.Conv2D(32, (5, 5), strides=(2, 2), activation='relu', input_shape=X.shape[1:]))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(2, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

EPOCHS = 1
history = model.fit(X, y, batch_size=5, epochs=EPOCHS, validation_split=0.1)

accuray, loss = model.evaluate(X, y)
print(accuray, loss)

model.save('Clouds.model')

loss = history.history["loss"]
acc = history.history["accuracy"]
epoch = np.arange(EPOCHS)
plt.plot(epoch, loss)
# plt.plot(epoch, val_loss)
plt.plot(epoch, acc, color='red')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Training Loss')
plt.legend(['train', 'val'])
plt.show()

However, in this case the error message persists even after applying the same fix.

Any suggestions?


Answer:

It looks like you are trying to solve a binary classification problem. As @Tfer2 suggested, change the loss function from categorical_crossentropy to binary_crossentropy.
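The mismatch comes from the label/output shapes: in the question, y holds integer 0/1 labels (shape (None, 1) once batched), while Dense(2, activation='softmax') produces outputs of shape (None, 2), and categorical_crossentropy requires those shapes to match. binary_crossentropy is normally paired with a single sigmoid output unit. A minimal sketch of that variant, assuming the X and y arrays from the question (only the head layers are shown for brevity; the asker's full conv stack would go in between):

from tensorflow import keras
from tensorflow.keras import layers

# Binary head: one sigmoid unit giving P(class 1), trained directly on integer 0/1 labels
model = keras.Sequential([
    layers.Conv2D(32, (5, 5), strides=(2, 2), activation='relu', input_shape=X.shape[1:]),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # single output instead of Dense(2, softmax)
])
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',  # matches the (None, 1) label shape
              metrics=['accuracy'])
model.fit(X, y, batch_size=5, epochs=1, validation_split=0.1)

Keeping Dense(2, activation='softmax') also works if the loss is changed to sparse_categorical_crossentropy (which accepts integer labels), or if y is one-hot encoded with keras.utils.to_categorical.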

Working sample code

import tensorflow as tf
import numpy as np
from tensorflow.keras import datasets
import tensorflow.keras as keras

# Assumed: the original snippet used train_images/test_images without loading them;
# CIFAR-10 matches the (32, 32, 3) input shape and the 1563 steps per epoch below.
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# input_shape = (X_train.shape[0], X_train.shape[1], X_train.shape[2])
input_shape = (32, 32, 3)

model = tf.keras.Sequential()
# First layer
model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
model.add(keras.layers.MaxPool2D((3, 3), strides=(2, 2), padding='same'))
# Second layer
model.add(keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=input_shape))
model.add(keras.layers.MaxPool2D((3, 3), strides=(2, 2), padding='same'))
# Third layer
model.add(keras.layers.Conv2D(64, (2, 2), activation='relu', input_shape=input_shape))
model.add(keras.layers.MaxPool2D((2, 2), strides=(2, 2), padding='same'))
# Flatten
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.3))
# Output layer
model.add(keras.layers.Dense(2, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, validation_data=(test_images, test_labels), batch_size=32, epochs=50)

Output

Epoch 1/50
1563/1563 [==============================] - 25s 10ms/step - loss: 1.8524 - accuracy: 0.3163 - val_loss: 1.5800 - val_accuracy: 0.4311
Epoch 2/50
1563/1563 [==============================] - 15s 9ms/step - loss: 1.5516 - accuracy: 0.4329 - val_loss: 1.4234 - val_accuracy: 0.4886
Epoch 3/50
1563/1563 [==============================] - 17s 11ms/step - loss: 1.4365 - accuracy: 0.4789 - val_loss: 1.3575 - val_accuracy: 0.5111
Epoch 4/50
1563/1563 [==============================] - 14s 9ms/step - loss: 1.3624 - accuracy: 0.5098 - val_loss: 1.2803 - val_accuracy: 0.5471
Epoch 5/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.3069 - accuracy: 0.5322 - val_loss: 1.2305 - val_accuracy: 0.5663
Epoch 6/50
1563/1563 [==============================] - 12s 8ms/step - loss: 1.2687 - accuracy: 0.5471 - val_loss: 1.1839 - val_accuracy: 0.5796
Epoch 7/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.2243 - accuracy: 0.5668 - val_loss: 1.1430 - val_accuracy: 0.5940
Epoch 8/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.1891 - accuracy: 0.5800 - val_loss: 1.1261 - val_accuracy: 0.6061
Epoch 9/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.1568 - accuracy: 0.5916 - val_loss: 1.0998 - val_accuracy: 0.6157
Epoch 10/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.1219 - accuracy: 0.6053 - val_loss: 1.0769 - val_accuracy: 0.6210
Epoch 11/50
1563/1563 [==============================] - 12s 8ms/step - loss: 1.0993 - accuracy: 0.6148 - val_loss: 1.0369 - val_accuracy: 0.6335
Epoch 12/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.0709 - accuracy: 0.6232 - val_loss: 1.0119 - val_accuracy: 0.6463
Epoch 13/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.0473 - accuracy: 0.6302 - val_loss: 0.9964 - val_accuracy: 0.6516
Epoch 14/50
1563/1563 [==============================] - 13s 8ms/step - loss: 1.0252 - accuracy: 0.6419 - val_loss: 0.9782 - val_accuracy: 0.6587
Epoch 15/50
1563/1563 [==============================] - 12s 8ms/step - loss: 1.0035 - accuracy: 0.6469 - val_loss: 0.9569 - val_accuracy: 0.6644
Epoch 16/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9836 - accuracy: 0.6572 - val_loss: 0.9586 - val_accuracy: 0.6633
Epoch 17/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9656 - accuracy: 0.6614 - val_loss: 0.9192 - val_accuracy: 0.6790
Epoch 18/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.9506 - accuracy: 0.6679 - val_loss: 0.9133 - val_accuracy: 0.6781
Epoch 19/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9273 - accuracy: 0.6756 - val_loss: 0.9046 - val_accuracy: 0.6824
Epoch 20/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.9129 - accuracy: 0.6795 - val_loss: 0.8855 - val_accuracy: 0.6910
Epoch 21/50
1563/1563 [==============================] - 14s 9ms/step - loss: 0.8924 - accuracy: 0.6873 - val_loss: 0.8886 - val_accuracy: 0.6927
Epoch 22/50
1563/1563 [==============================] - 16s 10ms/step - loss: 0.8840 - accuracy: 0.6905 - val_loss: 0.8625 - val_accuracy: 0.7013
Epoch 23/50
1563/1563 [==============================] - 15s 9ms/step - loss: 0.8655 - accuracy: 0.6980 - val_loss: 0.8738 - val_accuracy: 0.6950
Epoch 24/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8543 - accuracy: 0.7019 - val_loss: 0.8454 - val_accuracy: 0.7064
Epoch 25/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8388 - accuracy: 0.7056 - val_loss: 0.8354 - val_accuracy: 0.7063
Epoch 26/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8321 - accuracy: 0.7115 - val_loss: 0.8244 - val_accuracy: 0.7161
Epoch 27/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.8169 - accuracy: 0.7163 - val_loss: 0.8390 - val_accuracy: 0.7084
Epoch 28/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.8071 - accuracy: 0.7190 - val_loss: 0.8372 - val_accuracy: 0.7127
Epoch 29/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7949 - accuracy: 0.7219 - val_loss: 0.7990 - val_accuracy: 0.7217
Epoch 30/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7861 - accuracy: 0.7273 - val_loss: 0.7940 - val_accuracy: 0.7281
Epoch 31/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7750 - accuracy: 0.7299 - val_loss: 0.7933 - val_accuracy: 0.7262
Epoch 32/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7635 - accuracy: 0.7373 - val_loss: 0.7964 - val_accuracy: 0.7254
Epoch 33/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7537 - accuracy: 0.7361 - val_loss: 0.7891 - val_accuracy: 0.7259
Epoch 34/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7460 - accuracy: 0.7410 - val_loss: 0.7893 - val_accuracy: 0.7257
Epoch 35/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7366 - accuracy: 0.7448 - val_loss: 0.7713 - val_accuracy: 0.7332
Epoch 36/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7275 - accuracy: 0.7492 - val_loss: 0.8443 - val_accuracy: 0.7095
Epoch 37/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7257 - accuracy: 0.7478 - val_loss: 0.7583 - val_accuracy: 0.7365
Epoch 38/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.7097 - accuracy: 0.7535 - val_loss: 0.7497 - val_accuracy: 0.7458
Epoch 39/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.7091 - accuracy: 0.7554 - val_loss: 0.7588 - val_accuracy: 0.7370
Epoch 40/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6945 - accuracy: 0.7576 - val_loss: 0.7583 - val_accuracy: 0.7411
Epoch 41/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6888 - accuracy: 0.7592 - val_loss: 0.7481 - val_accuracy: 0.7408
Epoch 42/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6829 - accuracy: 0.7634 - val_loss: 0.7372 - val_accuracy: 0.7456
Epoch 43/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6742 - accuracy: 0.7665 - val_loss: 0.7324 - val_accuracy: 0.7475
Epoch 44/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6646 - accuracy: 0.7679 - val_loss: 0.7444 - val_accuracy: 0.7425
Epoch 45/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6613 - accuracy: 0.7686 - val_loss: 0.7294 - val_accuracy: 0.7506
Epoch 46/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6499 - accuracy: 0.7712 - val_loss: 0.7335 - val_accuracy: 0.7470
Epoch 47/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6446 - accuracy: 0.7759 - val_loss: 0.7223 - val_accuracy: 0.7544
Epoch 48/50
1563/1563 [==============================] - 12s 8ms/step - loss: 0.6376 - accuracy: 0.7793 - val_loss: 0.7259 - val_accuracy: 0.7496
Epoch 49/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6341 - accuracy: 0.7803 - val_loss: 0.7705 - val_accuracy: 0.7355
Epoch 50/50
1563/1563 [==============================] - 13s 8ms/step - loss: 0.6234 - accuracy: 0.7820 - val_loss: 0.7116 - val_accuracy: 0.7562
