Why aren't the training and validation losses of my convolutional autoencoder (CAE) dropping significantly? The training data has shape 10496x1024, and the CAE is built in Keras and trained on 32x32 image patches. I have already tried L2 regularization, but it made little difference. I trained for 20 epochs. What other approaches are worth trying?
The output is as follows:
Epoch 1/20 10496/10496 [========] – 52s – loss: 0.4029 – val_loss: 0.3821
Epoch 2/20 10496/10496 [========] – 52s – loss: 0.3825 – val_loss: 0.3784
Epoch 3/20 10496/10496 [=======] – 52s – loss: 0.3802 – val_loss: 0.3772
Epoch 4/20 10496/10496 [=======] – 51s – loss: 0.3789 – val_loss: 0.3757
Epoch 5/20 10496/10496 [=======] – 52s – loss: 0.3778 – val_loss: 0.3752
Epoch 6/20 10496/10496 [=======] – 51s – loss: 0.3770 – val_loss: 0.3743
Epoch 7/20 10496/10496 [=======] – 54s – loss: 0.3763 – val_loss: 0.3744
Epoch 8/20 10496/10496 [=======] – 51s – loss: 0.3758 – val_loss: 0.3735
Epoch 9/20 10496/10496 [=======] – 51s – loss: 0.3754 – val_loss: 0.3731
Epoch 10/20 10496/10496 [=======] – 51s – loss: 0.3748 – val_loss: 0.3739
Epoch 11/20 10496/10496 [=======] – 51s – loss: 0.3745 – val_loss: 0.3729
Epoch 12/20 10496/10496 [=======] – 54s – loss: 0.3741 – val_loss: 0.3723
Epoch 13/20 10496/10496 [=======] – 51s – loss: 0.3736 – val_loss: 0.3718
Epoch 14/20 10496/10496 [=======] – 52s – loss: 0.3733 – val_loss: 0.3716
Epoch 15/20 10496/10496 [=======] – 52s – loss: 0.3731 – val_loss: 0.3717
Epoch 16/20 10496/10496 [=======] – 51s – loss: 0.3728 – val_loss: 0.3712
Epoch 17/20 10496/10496 [=======] – 49s – loss: 0.3725 – val_loss: 0.3709
Epoch 18/20 10496/10496 [=======] – 36s – loss: 0.3723 – val_loss: 0.3710
Epoch 19/20 10496/10496 [=======] – 37s – loss: 0.3721 – val_loss: 0.3708
Epoch 20/20 10496/10496 [=======] – 37s – loss: 0.3720 – val_loss: 0.3704
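(For reference, a minimal Keras sketch of the kind of convolutional autoencoder with L2 regularization described above; the filter counts, depth, optimizer, and loss here are illustrative assumptions, not the asker's actual model.)

```python
from tensorflow.keras import layers, models, regularizers

def build_cae(l2=1e-4):
    # Input: 32x32 single-channel patches
    inputs = layers.Input(shape=(32, 32, 1))
    # Encoder: two conv + pooling stages, 32x32 -> 8x8
    x = layers.Conv2D(16, (3, 3), activation='relu', padding='same',
                      kernel_regularizer=regularizers.l2(l2))(inputs)
    x = layers.MaxPooling2D((2, 2), padding='same')(x)
    x = layers.Conv2D(8, (3, 3), activation='relu', padding='same',
                      kernel_regularizer=regularizers.l2(l2))(x)
    encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
    # Decoder: mirror of the encoder, 8x8 -> 32x32
    x = layers.Conv2D(8, (3, 3), activation='relu', padding='same',
                      kernel_regularizer=regularizers.l2(l2))(encoded)
    x = layers.UpSampling2D((2, 2))(x)
    x = layers.Conv2D(16, (3, 3), activation='relu', padding='same',
                      kernel_regularizer=regularizers.l2(l2))(x)
    x = layers.UpSampling2D((2, 2))(x)
    outputs = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
```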
Answer:
Your network is still learning; the loss curve has not clearly flattened out by epoch 20. You could try a higher learning rate and, given enough data, train for more epochs with early stopping. This can also be combined with regularization and k-fold cross-validation.
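A minimal sketch of this advice in Keras, assuming `model` is the compiled autoencoder and `x_train` / `x_val` are the 32x32x1 patch arrays (the learning rate, patience, epoch cap, and batch size below are illustrative values, not tuned recommendations):

```python
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Recompile with a larger learning rate and train longer, letting early stopping
# decide when the validation loss has genuinely plateaued.
model.compile(optimizer=Adam(learning_rate=1e-3), loss='binary_crossentropy')

early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

history = model.fit(x_train, x_train,              # autoencoder reconstructs its input
                    validation_data=(x_val, x_val),
                    epochs=200,                     # generous cap; early stopping ends training sooner
                    batch_size=128,
                    callbacks=[early_stop])
```

The `patience` parameter controls how many epochs without improvement in `val_loss` are tolerated before stopping, so the model keeps training as long as it is still making slow but real progress, as in the log above.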