TensorFlow 2.0: first shape element is None when creating an Input

I am trying to create an input as follows:

    Tx = 318
    n_freq = 101
    input_anchor = Input(shape=(n_freq, Tx), name='input_anchor')

When I run:

    input_anchor.shape

I get:

    TensorShape([None, 101, 318])

Later, when I try to use this input in a model, I get the following error:

    TypeError: Cannot iterate over a tensor with unknown first dimension.

In TensorFlow's ops.py I found this block of code, which is most likely where my code is failing:

    def __iter__(self):
      if not context.executing_eagerly():
        raise TypeError(
            "Tensor objects are only iterable when eager execution is "
            "enabled. To iterate over this tensor use tf.map_fn.")
      shape = self._shape_tuple()
      if shape is None:
        raise TypeError("Cannot iterate over a tensor with unknown shape.")
      if not shape:
        raise TypeError("Cannot iterate over a scalar tensor.")
      if shape[0] is None:
        raise TypeError(
            "Cannot iterate over a tensor with unknown first dimension.")
      for i in xrange(shape[0]):
        yield self[i]
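
As far as I can tell, the same TypeError can be reproduced in isolation by iterating over a symbolic Keras input tensor, whose batch dimension is None (a minimal sketch assuming TensorFlow 2.0 graph tensors; this is not part of my original code):

    import tensorflow as tf
    from tensorflow.keras.layers import Input

    x = Input(shape=(101, 318))   # symbolic tensor with shape (None, 101, 318)
    for row in x:                 # shape[0] is None, so __iter__ above raises
        pass                      # "Cannot iterate over a tensor with unknown first dimension."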

If you want to see my full model implementation, here it is:

    def base_model(input_shape):
        X_input = Input(shape = input_shape)

        # Step 1: CONV layer (≈4 lines)
        X = Conv1D(196, kernel_size = 15, strides = 4)(X_input)    # CONV1D
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Activation('relu')(X)                                   # ReLu activation
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)

        # Step 2: First GRU Layer (≈4 lines)
        X = LSTM(units = 128, return_sequences = True)(X_input)    # GRU (use 128 units and return the sequences)
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization

        # Step 3: Second GRU Layer (≈4 lines)
        X = LSTM(units = 128, return_sequences = True)(X)           # GRU (use 128 units and return the sequences)
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)

        # Step 4: Third GRU Layer (≈4 lines)
        X = LSTM(units = 128)(X)                                    # GRU (use 128 units and return the sequences)
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)

        X = Dense(64)(X)

        base_model = Model(inputs = X_input, outputs = X)
        return base_model


    def speech_model(input_shape, base_model):
        # get triplet vectors
        input_anchor = Input(shape=input_shape, name='input_anchor')
        input_positive = Input(shape=input_shape, name='input_positive')
        input_negative = Input(shape=input_shape, name='input_negative')

        vec_anchor = base_model(input_anchor)
        vec_positive = base_model(input_positive)
        vec_negative = base_model(input_negative)

        # Concatenate vectors vec_positive, vec_negative
        concat_layer = concatenate([vec_anchor, vec_positive, vec_negative], axis = -1, name='concat_layer')

        model = Model(inputs = [input_anchor, input_positive, input_negative], outputs = concat_layer, name = 'speech_to_vec')
        #model = Model(inputs = [input_anchor,input_positive,input_negative], outputs = [vec_anchor,vec_positive,vec_negative], name = 'speech_to_vec')
        #model = Model(inputs = [input_anchor,input_positiv], outputs=vec_anchor)

        return model

And here is the line that triggers the error:

    speech_model = speech_model(input_shape = (n_freq, Tx), base_model = base_model)

Thank you very much for reading; any help with this problem would be greatly appreciated.


Answer:

Your base_model(input_shape) function expects a tuple to be passed in, but you are passing it an Input layer.

    # Change
    vec_anchor = base_model(input_anchor)
    vec_positive = base_model(input_positive)
    vec_negative = base_model(input_negative)

    # to
    vec_anchor = base_model(input_shape)
    vec_positive = base_model(input_shape)
    vec_negative = base_model(input_shape)

In addition, you need to fix the inputs and outputs of the final model, because concatenate cannot join Model objects directly; it has to be given their output tensors, and the final Model has to be given their input tensors.

    concat_layer = concatenate([vec_anchor.output, vec_positive.output, vec_negative.output], axis = -1, name='concat_layer')
    model = Model(inputs = [vec_anchor.input, vec_positive.input, vec_negative.input], outputs = concat_layer, name = 'speech_to_vec')
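
Putting the two changes together, a minimal sketch of the revised speech_model might look like this (my own combination of the fixes above, assuming base_model is still the model-building function from the question):

    def speech_model(input_shape, base_model):
        # Each call builds a separate sub-model from the shape tuple
        vec_anchor = base_model(input_shape)
        vec_positive = base_model(input_shape)
        vec_negative = base_model(input_shape)

        # Concatenate the sub-models' output tensors, not the Model objects
        concat_layer = concatenate([vec_anchor.output, vec_positive.output, vec_negative.output],
                                   axis = -1, name='concat_layer')

        # Wire the combined model from the sub-models' input tensors
        model = Model(inputs = [vec_anchor.input, vec_positive.input, vec_negative.input],
                      outputs = concat_layer, name = 'speech_to_vec')
        return model

One design consequence worth noting: building three separate sub-models this way means the anchor, positive and negative branches do not share weights.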
