I ran into a problem while learning deep learning with Keras. The loss is not decreasing and stays very high, at around 650.
I am using the MNIST dataset from tensorflow.keras.datasets.mnist.
No errors are raised; my neural network simply is not learning.
Here is my model:
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.datasets import mnist
import tensorflow.nn as tfnn

# Load MNIST (28x28 grayscale digits, integer labels 0-9).
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()

inputdim = 28 * 28
model = Sequential()
model.add(Flatten())
model.add(Dense(inputdim, activation=tfnn.relu))
model.add(Dense(128, activation=tfnn.relu))
model.add(Dense(10, activation=tfnn.softmax))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=4)
```
Here is my output:
```
Epoch 1/4
60000/60000 [==============================] - 32s 527us/sample - loss: 646.0926 - acc: 6.6667e-05
Epoch 2/4
60000/60000 [==============================] - 39s 652us/sample - loss: 646.1003 - acc: 0.0000e+00
Epoch 3/4
60000/60000 [==============================] - 35s 590us/sample - loss: 646.1003 - acc: 0.0000e+00
Epoch 4/4
60000/60000 [==============================] - 33s 544us/sample - loss: 646.1003 - acc: 0.0000e+00
```
Answer:
OK, I added BatchNormalization between the layers and changed the loss function to 'sparse_categorical_crossentropy'. The loss change is what matters most here: the MNIST labels are plain integers (0-9), which is what 'sparse_categorical_crossentropy' expects, whereas 'categorical_crossentropy' expects one-hot encoded targets. Here is my modified network:
```python
from tensorflow.keras.layers import BatchNormalization

model = Sequential()
model.add(Flatten())
model.add(BatchNormalization(axis=1, momentum=0.99))
model.add(Dense(inputdim, activation=tfnn.relu))
model.add(BatchNormalization(axis=1, momentum=0.99))
model.add(Dense(128, activation=tfnn.relu))
model.add(BatchNormalization(axis=1, momentum=0.99))
model.add(Dense(10, activation=tfnn.softmax))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
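For reference, keeping the original 'categorical_crossentropy' would also have worked if the integer labels had been one-hot encoded first. A minimal sketch using tensorflow.keras.utils.to_categorical:

```python
from tensorflow.keras.utils import to_categorical

# One-hot encode the integer labels (e.g. 3 -> [0,0,0,1,0,0,0,0,0,0])
# so that 'categorical_crossentropy' receives the target shape it expects.
Y_train_onehot = to_categorical(Y_train, num_classes=10)

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train_onehot, epochs=4)
```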
Here are the results after the change:
```
Epoch 1/4
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.2045 - acc: 0.9374
Epoch 2/4
60000/60000 [==============================] - 55s 916us/sample - loss: 0.1007 - acc: 0.9689
```
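As an aside, another common contributor to loss values this large is feeding raw 0-255 pixel values into the network; scaling the inputs to [0, 1] is a cheap extra step worth trying. A minimal sketch, assuming the standard mnist.load_data() arrays from above:

```python
# Scale pixel intensities from [0, 255] to [0, 1] so the first
# Dense layer does not receive very large input values.
X_train = X_train / 255.0
X_test = X_test / 255.0
```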
Thanks for your help!