I need to use a CNN to recognize retinal disease. I have 1,400 images, 700 per class. My classes are (0 – no PDR) and (1 – PDR). I am trying to build a model that tells whether an input retina is at stage 4 of the disease.
I preprocessed the images as follows, resizing them all to 256×256:
ImageCV[index] = cv2.addWeighted(ImageCV[index],4, cv2.GaussianBlur(ImageCV[index],(0,0), 256/30), -4, 128)
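For context, here is a minimal sketch of the whole per-image step, i.e. load, resize to 256×256, then apply the Gaussian-blur subtraction shown above (the helper name and the example path are only illustrative, not my actual ReadImages code):

import cv2

def preprocess_image(path, size=256):
    # load the fundus image and resize it to size x size
    img = cv2.resize(cv2.imread(path), (size, size))
    # subtract a heavily blurred copy to enhance local contrast (same call as above)
    img = cv2.addWeighted(img, 4, cv2.GaussianBlur(img, (0, 0), size / 30), -4, 128)
    return img

# hypothetical usage:
# img = preprocess_image('train/pdr_example.jpg')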
A processed image looks like this: https://i.sstatic.net/uydYm.jpg
Then, when I train the model, the accuracy is very high (99-something percent), but when I try to predict some test images it fails. For example, I put 10 PDR examples in a test folder and tried to predict them (they should all be 1). These are the results:
[[0.]][[0.]][[1.]][[0.]][[0.]][[0.]][[1.]][[0.]][[0.]][[0.]]
Here is my model:
visible = Input(shape=(256,256,3))
conv1 = Conv2D(16, kernel_size=(3,3), activation='relu', strides=(1, 1))(visible)
conv2 = Conv2D(16, kernel_size=(3,3), activation='relu', strides=(1, 1))(conv1)
bat1 = BatchNormalization()(conv2)
conv3 = ZeroPadding2D(padding=(1, 1))(bat1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(32, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(pool1)
conv5 = Conv2D(32, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(conv4)
bat2 = BatchNormalization()(conv5)
pool2 = MaxPooling2D(pool_size=(1, 1))(bat2)
conv6 = Conv2D(64, kernel_size=(3,3), activation='relu', strides=(1, 1), padding='valid')(pool2)
conv7 = Conv2D(64, kernel_size=(3,3), activation='relu', strides=(1, 1), padding='valid')(conv6)
bat3 = BatchNormalization()(conv7)
conv7 = ZeroPadding2D(padding=(1, 1))(bat3)
pool3 = MaxPooling2D(pool_size=(1, 1))(conv7)
conv8 = Conv2D(128, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(pool3)
conv9 = Conv2D(128, kernel_size=(2,2), activation='relu', strides=(1, 1), padding='valid')(conv8)
bat4 = BatchNormalization()(conv9)
pool4 = MaxPooling2D(pool_size=(1, 1))(bat4)
flat = Flatten()(pool4)
output = Dense(1, activation='sigmoid')(flat)
model = Model(inputs=visible, outputs=output)

opt = optimizers.adam(lr=0.001, decay=0.0)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])

data, labels = ReadImages(TRAIN_DIR)
test, lt = ReadImages(TEST_DIR)

data = np.array(data)
labels = np.array(labels)
test = np.array(test)
lt = np.array(lt)

np.random.permutation(len(data))
np.random.permutation(len(labels))
np.random.permutation(len(test))
np.random.permutation(len(lt))

model.fit(data, labels, epochs=7, validation_data=(test, lt))
model.save('model.h5')
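(As far as I understand, np.random.permutation(len(data)) by itself only returns an index array, so to really shuffle I would need to apply it to both arrays. A minimal sketch of that, which is what I do in the updated code further down:)

perm = np.random.permutation(len(data))   # one shared index permutation
data = data[perm]                         # reorder the images
labels = labels[perm]                     # reorder the labels with the same indices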
And here is my predict.py file:
model = load_model('model.h5')

for filename in os.listdir(r'v/'):
    if filename.endswith(".jpg") or filename.endswith(".ppm") or filename.endswith(".jpeg"):
        ImageCV = cv2.resize(cv2.imread(os.path.join(TEST_DIR) + filename), (256,256))
        ImageCV = cv2.addWeighted(ImageCV, 4, cv2.GaussianBlur(ImageCV, (0,0), 256/30), -4, 128)

        cv2.imshow('image', ImageCV)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

        ImageCV = ImageCV.reshape(-1, 256, 256, 3)
        print(model.predict(ImageCV))
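For clarity, since the model ends in a single sigmoid unit trained with binary_crossentropy, I read the printed value as a probability and use the usual 0.5 cut-off to turn it into a class label. A minimal sketch of that last step (the threshold is just the standard convention, nothing tuned for my data):

prob = float(model.predict(ImageCV)[0][0])  # sigmoid output in [0, 1]
label = 1 if prob > 0.5 else 0              # 1 = PDR, 0 = no PDR
print(prob, label)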
What can I do to improve my predictions across the board? I would really appreciate any help.
UPDATE: Well, I tried everything suggested in the answers, but it still does not work. Here is my code now:
visible = Input(shape=(256,256,3))
conv1 = Conv2D(16, kernel_size=(3,3), activation='relu', strides=(1, 1))(visible)
conv2 = Conv2D(32, kernel_size=(3,3), activation='relu', strides=(1, 1))(conv1)
bat1 = BatchNormalization()(conv2)
conv3 = ZeroPadding2D(padding=(1, 1))(bat1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv3)
drop1 = Dropout(0.30)(pool1)
conv4 = Conv2D(32, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(drop1)
conv5 = Conv2D(64, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(conv4)
bat2 = BatchNormalization()(conv5)
pool2 = MaxPooling2D(pool_size=(1, 1))(bat2)
drop1 = Dropout(0.30)(pool2)
conv6 = Conv2D(128, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(pool2)
conv7 = Conv2D(128, kernel_size=(2,2), activation='relu', strides=(1, 1), padding='valid')(conv6)
bat3 = BatchNormalization()(conv7)
pool3 = MaxPooling2D(pool_size=(1, 1))(bat3)
drop1 = Dropout(0.30)(pool3)
flat = Flatten()(pool3)
drop4 = Dropout(0.50)(flat)
output = Dense(1, activation='sigmoid')(drop4)
model = Model(inputs=visible, outputs=output)

opt = optimizers.adam(lr=0.001, decay=0.0)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])

data, labels = ReadImages(TRAIN_DIR)
test, lt = ReadImages(TEST_DIR)

data = np.array(data)
labels = np.array(labels)

perm = np.random.permutation(len(data))
data = data[perm]
labels = labels[perm]

#model.fit(data, labels, epochs=8, validation_data = (np.array(test), np.array(lt)))

aug = ImageDataGenerator(rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True)

# train the network
model.fit_generator(aug.flow(data, labels, batch_size=32),
                    validation_data=(np.array(test), np.array(lt)),
                    steps_per_epoch=len(data) // 32, epochs=7)
And this is what it returns:
Epoch 1/7
43/43 [==============================] - 1004s 23s/step - loss: 1.8090 - acc: 0.9724 - val_loss: 1.7871 - val_acc: 0.9861
Epoch 2/7
43/43 [==============================] - 1003s 23s/step - loss: 1.8449 - acc: 0.9801 - val_loss: 1.4828 - val_acc: 1.0000
Epoch 3/7
43/43 [==============================] - 1092s 25s/step - loss: 1.5704 - acc: 0.9920 - val_loss: 1.3985 - val_acc: 1.0000
Epoch 4/7
43/43 [==============================] - 1062s 25s/step - loss: 1.5219 - acc: 0.9898 - val_loss: 1.3167 - val_acc: 1.0000
Epoch 5/7
43/43 [==============================] - 990s 23s/step - loss: 2.5744 - acc: 0.9222 - val_loss: 2.9347 - val_acc: 0.9028
Epoch 6/7
43/43 [==============================] - 983s 23s/step - loss: 1.6053 - acc: 0.9840 - val_loss: 1.3299 - val_acc: 1.0000
Epoch 7/7
43/43 [==============================] - 974s 23s/step - loss: 1.6180 - acc: 0.9801 - val_loss: 1.5181 - val_acc: 0.9861
I have added dropout, reduced the number of layers, and applied data augmentation, but it still does not work (every prediction comes out as 0)...
Can anyone please help with this?
Answer: