I'm trying to build a character-level autoencoder for text sequences with Keras. When I compile the model I get an error about tensor shapes, shown below. I printed the specs of each layer to check whether the tensor shapes match, and the problem seems to be that the last Lambda layer does not get the correct output tensor shape, but I can't figure out why, or how to specify it, and I haven't found anything about it in the Keras documentation or on Google.
Below the error output I've also included the part of my code that defines the model. The full script is available here if needed: PasteBin.
Error and layer output
(Pay particular attention to the last layer.)
0 <keras.engine.topology.InputLayer object at 0x7f5d290eb588> Input shape (None, 80) Output shape (None, 80)
1 <keras.layers.core.Lambda object at 0x7f5d35f25a20> Input shape (None, 80) Output shape (None, 80, 99)
2 <keras.layers.core.Dense object at 0x7f5d2dda52e8> Input shape (None, 80, 99) Output shape (None, 80, 256)
3 <keras.layers.core.Dropout object at 0x7f5d25004da0> Input shape (None, 80, 256) Output shape (None, 80, 256)
4 <keras.layers.core.Dense object at 0x7f5d2501ac18> Input shape (None, 80, 256) Output shape (None, 80, 128)
5 <keras.layers.core.Dense object at 0x7f5d24dc6cc0> Input shape (None, 80, 128) Output shape (None, 80, 64)
6 <keras.layers.core.Dense object at 0x7f5d24de1fd0> Input shape (None, 80, 64) Output shape (None, 80, 128)
7 <keras.layers.core.Dropout object at 0x7f5d24df4a20> Input shape (None, 80, 128) Output shape (None, 80, 128)
8 <keras.layers.core.Dense object at 0x7f5d24dfeb38> Input shape (None, 80, 128) Output shape (None, 80, 256)
9 <keras.layers.core.Lambda object at 0x7f5d24da6a20> Input shape (None, 80, 256) Output shape (None, 80)
----------------
0 Input Tensor("input_1:0", shape=(?, 80), dtype=int64) Output Tensor("input_1:0", shape=(?, 80), dtype=int64)
1 Input Tensor("input_1:0", shape=(?, 80), dtype=int64) Output Tensor("ToFloat:0", shape=(?, 80, 99), dtype=float32)
2 Input Tensor("ToFloat:0", shape=(?, 80, 99), dtype=float32) Output Tensor("Relu:0", shape=(?, 80, 256), dtype=float32)
3 Input Tensor("Relu:0", shape=(?, 80, 256), dtype=float32) Output Tensor("cond/Merge:0", shape=(?, 80, 256), dtype=float32)
4 Input Tensor("cond/Merge:0", shape=(?, 80, 256), dtype=float32) Output Tensor("Relu_1:0", shape=(?, 80, 128), dtype=float32)
5 Input Tensor("Relu_1:0", shape=(?, 80, 128), dtype=float32) Output Tensor("Relu_2:0", shape=(?, 80, 64), dtype=float32)
6 Input Tensor("Relu_2:0", shape=(?, 80, 64), dtype=float32) Output Tensor("Relu_3:0", shape=(?, 80, 128), dtype=float32)
7 Input Tensor("Relu_3:0", shape=(?, 80, 128), dtype=float32) Output Tensor("cond_1/Merge:0", shape=(?, 80, 128), dtype=float32)
8 Input Tensor("cond_1/Merge:0", shape=(?, 80, 128), dtype=float32) Output Tensor("truediv:0", shape=(?, 80, 256), dtype=float32)
9 Input Tensor("truediv:0", shape=(?, 80, 256), dtype=float32) Output Tensor("ToFloat_1:0", shape=(), dtype=float32)
----------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py", line 578, in merge_with
    self.assert_same_rank(other)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py", line 624, in assert_same_rank
    "Shapes %s and %s must have the same rank" % (self, other))
ValueError: Shapes (?, ?) and () must have the same rank

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/ops/nn_impl.py", line 153, in sigmoid_cross_entropy_with_logits
    labels.get_shape().merge_with(logits.get_shape())
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py", line 585, in merge_with
    (self, other))
ValueError: Shapes (?, ?) and () are not compatible

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "lstm.py", line 97, in <module>
    autoencoder.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 667, in compile
    sample_weight, mask)
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 318, in weighted
    score_array = fn(y_true, y_pred)
  File "/usr/local/lib/python3.4/dist-packages/keras/objectives.py", line 45, in binary_crossentropy
    return K.mean(K.binary_crossentropy(y_pred, y_true), axis=-1)
  File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 2449, in binary_crossentropy
    logits=output)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/ops/nn_impl.py", line 156, in sigmoid_cross_entropy_with_logits
    % (logits.get_shape(), labels.get_shape()))
ValueError: logits and labels must have the same shape (() vs (?, ?))
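For reference, a listing like the one above can be produced by looping over the model's layers. This is a minimal sketch, not the exact script from the PasteBin link, and it assumes the standard Keras single-node layer attributes (input_shape, output_shape, input, output):

for i, layer in enumerate(autoencoder.layers):
    # static shapes as Keras records them
    print(i, layer, 'Input shape', layer.input_shape, 'Output shape', layer.output_shape)
print('-' * 16)
for i, layer in enumerate(autoencoder.layers):
    # the underlying TensorFlow tensors flowing in and out of each layer
    print(i, 'Input', layer.input, 'Output', layer.output)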
Code
I build my model with the following code:
def binarize(x, sz):
    return tf.to_float(tf.one_hot(x, sz, on_value=1, off_value=0, axis=-1))

def binarize_outputshape(in_shape):
    return in_shape[0], in_shape[1], len(chars)

def debinarize(x):
    return tf.to_float(np.argmax(x))  # get the character with the highest probability

def debinarize_outputshape(in_shape):
    return in_shape[0], in_shape[1]

input_sentence = Input(shape=(max_title_len,), dtype='int64')
# convert the sentence into one-hot vectors
one_hot = Lambda(binarize, output_shape=binarize_outputshape, arguments={'sz': len(chars)})(input_sentence)

# shape: max_title_len * chars = 80 * 55 = 4400
encoder = Dense(256, activation='relu')(one_hot)
encoder = Dropout(0.1)(encoder)
encoder = Dense(128, activation='relu')(encoder)
encoder = Dense(64, activation='relu')(encoder)

decoder = Dense(128, activation='relu')(encoder)
decoder = Dropout(0.1)(decoder)
decoder = Dense(256, activation='softmax')(decoder)
# convert back from one-hot vectors
decoder = Lambda(debinarize, output_shape=debinarize_outputshape)(decoder)

autoencoder = Model(input=input_sentence, output=decoder)
First I feed in a text sequence of at most 80 characters, and the Lambda layer turns each character into a one-hot vector. At the end I want to convert the one-hot vectors back, taking only the index with the highest probability as the decoded character.
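To illustrate what those two conversions are meant to do with the shapes, here is a standalone sketch with made-up sizes (not part of the model code): tf.one_hot appends a new character axis of size sz, and the decoding step should collapse that axis again by taking the argmax over it.

import tensorflow as tf

batch, max_title_len, n_chars = 2, 80, 99  # hypothetical sizes for illustration only

ids = tf.zeros((batch, max_title_len), dtype=tf.int64)     # stands in for the input sentences
one_hot = tf.to_float(tf.one_hot(ids, n_chars, axis=-1))   # adds the character axis
decoded = tf.to_float(tf.argmax(one_hot, axis=-1))         # collapses the character axis again

print(one_hot.get_shape())  # (2, 80, 99)
print(decoded.get_shape())  # (2, 80)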
Questions
- Am I missing something in the model-building code?
- How do I let the Lambda layer know what the output tensor shape should be?
Edit
As Nassim Ben pointed out, the problem was in the debinarize function. Changing it to:
def debinarize(x):
    return tf.to_float(tf.argmax(x, axis=0))
at least sets some value for the output tensor's shape. The value is a bit odd, though: it is (80, 256), which does not match the layer's output shape of (None, 80). All the other output tensor shapes and output shapes match up (I assume '?' and None mean roughly the same thing...). More specifically, the Lambda layer now looks like this:
<keras.layers.core.Lambda object at 0x7fafcc5a59b0> Input shape (None, 80, 256) Output shape (None, 80)
......
Input Tensor("truediv:0", shape=(?, 80, 256), dtype=float32) Output Tensor("ToFloat_1:0", shape=(80, 256), dtype=float32)
The problem is that I want the output tensor shape to be (?, 80), just like the input to the first layer. I only changed the argmax; nothing else in the code was changed.
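For what it's worth, the (80, 256) comes from reducing over axis 0: argmax removes the axis it reduces over, so axis=0 drops the batch dimension instead of the character dimension. A quick shape comparison (a sketch using a placeholder that stands in for the decoder output):

import tensorflow as tf

dec = tf.placeholder(tf.float32, shape=(None, 80, 256))  # stands in for the decoder output

print(tf.argmax(dec, axis=0).get_shape())  # (80, 256) -- batch axis reduced away
print(tf.argmax(dec, axis=2).get_shape())  # (?, 80)   -- character axis reduced away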
The error I get now is:
Traceback (most recent call last):
  File "lstm.py", line 122, in <module>
    callbacks=[earlystop_cb, check_cb, keras.callbacks.TensorBoard(log_dir='/tmp/autoencoder')])
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 1168, in fit
    self._make_train_function()
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 760, in _make_train_function
    self.total_loss)
  File "/usr/local/lib/python3.4/dist-packages/keras/optimizers.py", line 433, in get_updates
    m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/ops/math_ops.py", line 883, in binary_op_wrapper
    y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
    as_ref=False)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_util.py", line 360, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
Answer:
I think it's because you are using a numpy function on a tensor. Try using tf's argmax function (I think the axis you want to reduce over is 1, not sure):
def debinarize(x):
    return tf.to_float(tf.argmax(x, axis=2))  # get the character with the highest probability
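For completeness, the decode step would then look something like this (a sketch only, reusing the debinarize_outputshape helper from the question; axis=-1 is equivalent to axis=2 here because the character probabilities sit on the last axis):

def debinarize(x):
    # reduce over the character axis: (?, 80, 256) -> (?, 80)
    return tf.to_float(tf.argmax(x, axis=-1))

def debinarize_outputshape(in_shape):
    return in_shape[0], in_shape[1]  # (None, 80), matching the reduced tensor shape

decoder = Lambda(debinarize, output_shape=debinarize_outputshape)(decoder)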
Does this work?