I am trying to take the Google model from the Teachable Machine application https://teachablemachine.withgoogle.com/ and add a few layers before the output layer. Whenever I retrain the model, it always returns the following error:
ValueError: Input 0 of layer dense_25 is incompatible with the layer: expected axis -1 of input shape to have value 5 but received input with shape [20, 512]
My approach is as follows:
When I retrain the model, it returns the error above.
If I retrain the model without adding the new layers, everything works fine. Can anyone suggest what the problem is?
Answer:
Updated answer
If you want to add layers between two layers of a pre-trained model, it is not as straightforward as adding layers with the add method. Doing it that way leads to unexpected behavior.
Error analysis:
If you build the model as follows (the way you specified):
model.layers[-1].add(Dense(512, activation="relu"))
model.add(Dense(128, activation="relu"))
model.add(Dense(32))
model.add(Dense(5))
Output of model.summary():
Model: "sequential_12"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
sequential_9 (Sequential)    (None, 1280)              410208
_________________________________________________________________
sequential_11 (Sequential)   (None, 512)               131672
_________________________________________________________________
dense_12 (Dense)             (None, 128)               768
_________________________________________________________________
dense_13 (Dense)             (None, 32)                4128
_________________________________________________________________
dense_14 (Dense)             (None, 5)                 165
=================================================================
Total params: 546,941
Trainable params: 532,861
Non-trainable params: 14,080
_________________________________________________________________
Everything looks fine here, but on closer inspection:
for l in model.layers:
    print("layer : ", l.name, ", expects input of shape : ", l.input_shape)
Output:
layer :  sequential_9 , expects input of shape :  (None, 224, 224, 3)
layer :  sequential_11 , expects input of shape :  (None, 1280)
layer :  dense_12 , expects input of shape :  (None, 5) <-- **PROBLEM**
layer :  dense_13 , expects input of shape :  (None, 128)
layer :  dense_14 , expects input of shape :  (None, 32)
The problem is that dense_12 expects an input of shape (None, 5), but it should expect an input of shape (None, 512), since we appended Dense(512) to sequential_11. The likely cause is that adding layers the way shown above does not update certain attributes, such as the output shape of sequential_11. As a result, during the forward pass there is a mismatch between the output of sequential_11 and the input of the dense_12 layer (dense_25 in your case).
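As a standalone illustration (not part of the original code): treating a Dense layer as a plain matrix multiplication makes the mismatch concrete. The batch size 20 and the feature sizes 512 and 5 below are taken directly from the error message.

```python
import numpy as np

batch = 20
# What sequential_11 actually emits after Dense(512) is appended:
out_of_previous = np.random.rand(batch, 512)
# dense_12 was built when the previous output had 5 features,
# so its kernel has shape (5, 128):
w_dense_12 = np.random.rand(5, 128)

try:
    # Forward pass of dense_12: (20, 512) @ (5, 128) cannot work
    out_of_previous @ w_dense_12
except ValueError as e:
    print("shape mismatch:", e)
```

The matrix multiply fails for exactly the same reason Keras reports: the last axis of the incoming tensor (512) does not match what the layer was built for (5).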
Possible workaround:
To your question about "adding layers between sequential_9 and sequential_11": you can add as many layers as you like between them, but you must always make sure that the output shape of the last added layer matches the input shape that sequential_11 expects. In this case, that is 1280.
Code:
sequential_1 = model.layers[0]  # reuse the pre-trained model
sequential_2 = model.layers[1]

from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

inp_sequential_1 = Input(sequential_1.layers[0].input_shape[1:])
out_sequential_1 = sequential_1(inp_sequential_1)

# Add layers between sequential_9 and sequential_11
out_intermediate = Dense(512, activation="relu")(out_sequential_1)
out_intermediate = Dense(128, activation="relu")(out_intermediate)
out_intermediate = Dense(32, activation="relu")(out_intermediate)

# Always make sure to include a layer whose output shape matches the
# input shape of sequential_11, in this case 1280
out_intermediate = Dense(1280, activation="relu")(out_intermediate)

# The output of the intermediate layers is passed to sequential_11
output = sequential_2(out_intermediate)

final_model = Model(inputs=inp_sequential_1, outputs=output)
Output of model.summary():
Model: "functional_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_5 (InputLayer)         [(None, 224, 224, 3)]     0
_________________________________________________________________
sequential_9 (Sequential)    (None, 1280)              410208
_________________________________________________________________
dense_15 (Dense)             (None, 512)               655872
_________________________________________________________________
dense_16 (Dense)             (None, 128)               65664
_________________________________________________________________
dense_17 (Dense)             (None, 32)                4128
_________________________________________________________________
dense_18 (Dense)             (None, 1280)              42240
_________________________________________________________________
sequential_11 (Sequential)   (None, 5)                 128600
=================================================================
Total params: 1,306,712
Trainable params: 1,292,632
Non-trainable params: 14,080
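As a quick sanity check (an illustrative sketch, not part of the original answer): simulating each Dense layer as a weight matrix in NumPy confirms that the feature sizes in this fixed architecture chain correctly, with the final 1280 -> 5 step standing in for sequential_11.

```python
import numpy as np

batch = 20
x = np.random.rand(batch, 1280)  # output of sequential_9

# Feature sizes along the fixed chain; 1280 -> 5 stands in for sequential_11
dims = [1280, 512, 128, 32, 1280, 5]
for n_in, n_out in zip(dims[:-1], dims[1:]):
    w = np.random.rand(n_in, n_out)  # Dense kernel: (in_features, out_features)
    x = x @ w                        # every step is shape-compatible

print(x.shape)  # (20, 5)
```

Because the Dense(1280) layer restores the width that sequential_11 was built for, no step raises the shape error from the question.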