TensorFlow 2.0: first shape element is None when creating an Input

I am trying to create an input in the following way:

    Tx = 318
    n_freq = 101
    input_anchor = Input(shape=(n_freq, Tx), name='input_anchor')

When I run:

    input_anchor.shape

I get:

    TensorShape([None, 101, 318])
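
As far as I understand, that leading None is the batch dimension that Keras prepends to whatever shape tuple is passed to Input; a minimal sketch of what I mean (my own test, assuming standard tensorflow.keras imports):

    # Sketch (my assumption): Input always prepends an unknown batch
    # dimension (None) to the shape tuple it is given.
    from tensorflow.keras.layers import Input

    x = Input(shape=(101, 318))                  # shape without the batch dimension
    print(x.shape)                               # (None, 101, 318)

    y = Input(shape=(101, 318), batch_size=32)   # pin the batch size explicitly
    print(y.shape)                               # (32, 101, 318)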

Later, when I try to use this input in my model, I get the following error:

    TypeError: Cannot iterate over a tensor with unknown first dimension.

In TensorFlow's ops.py I found this block of code, which is most likely where my code is failing:

    def __iter__(self):
        if not context.executing_eagerly():
            raise TypeError(
                "Tensor objects are only iterable when eager execution is "
                "enabled. To iterate over this tensor use tf.map_fn.")
        shape = self._shape_tuple()
        if shape is None:
            raise TypeError("Cannot iterate over a tensor with unknown shape.")
        if not shape:
            raise TypeError("Cannot iterate over a scalar tensor.")
        if shape[0] is None:
            raise TypeError(
                "Cannot iterate over a tensor with unknown first dimension.")
        for i in xrange(shape[0]):
            yield self[i]
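
If it helps, I can reproduce the same TypeError outside my model with just a few lines (my own minimal test, simply iterating over a symbolic input whose first dimension is None):

    # Minimal reproduction (my own test): iterating over a symbolic Keras
    # tensor whose first (batch) dimension is None triggers the same error.
    from tensorflow.keras.layers import Input

    input_anchor = Input(shape=(101, 318), name='input_anchor')
    for frame in input_anchor:   # TypeError: Cannot iterate over a tensor
        pass                     # with unknown first dimension.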

In case you want to see my whole model implementation, here it is:

    def base_model(input_shape):
        X_input = Input(shape = input_shape)

        # Step 1: CONV layer (≈4 lines)
        X = Conv1D(196, kernel_size = 15, strides = 4)(X_input)    # CONV1D
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Activation('relu')(X)                                   # ReLu activation
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)

        # Step 2: First GRU Layer (≈4 lines)
        X = LSTM(units = 128, return_sequences = True)(X_input)    # GRU (use 128 units and return the sequences)
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization

        # Step 3: Second GRU Layer (≈4 lines)
        X = LSTM(units = 128, return_sequences = True)(X)           # GRU (use 128 units and return the sequences)
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)

        # Step 4: Third GRU Layer (≈4 lines)
        X = LSTM(units = 128)(X)                                    # GRU (use 128 units and return the sequences)
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Dropout(rate = 0.2)(X)                                  # dropout (use 0.8)

        X = Dense(64)(X)

        base_model = Model(inputs = X_input, outputs = X)
        return base_model


    def speech_model(input_shape, base_model):
        # get triplets vectors
        input_anchor = Input(shape=input_shape, name='input_anchor')
        input_positive = Input(shape=input_shape, name='input_positive')
        input_negative = Input(shape=input_shape, name='input_negative')

        vec_anchor = base_model(input_anchor)
        vec_positive = base_model(input_positive)
        vec_negative = base_model(input_negative)

        # Concatenate vectors vec_positive, vec_negative
        concat_layer = concatenate([vec_anchor, vec_positive, vec_negative], axis = -1, name='concat_layer')

        model = Model(inputs = [input_anchor, input_positive, input_negative], outputs = concat_layer, name = 'speech_to_vec')
        #model = Model(inputs = [input_anchor, input_positive, input_negative], outputs = [vec_anchor, vec_positive, vec_negative], name = 'speech_to_vec')
        #model = Model(inputs = [input_anchor, input_positiv], outputs=vec_anchor)
        return model

And here is the line that produces the error:

    speech_model = speech_model(input_shape = (n_freq, Tx), base_model = base_model)

Thanks a lot for reading; any help with fixing this would be greatly appreciated.


Answer:

Your base_model(input_shape) function expects to be passed a tuple, but you are passing it an Input layer.

    # Change
    vec_anchor = base_model(input_anchor)
    vec_positive = base_model(input_positive)
    vec_negative = base_model(input_negative)
    # to
    vec_anchor = base_model(input_shape)
    vec_positive = base_model(input_shape)
    vec_negative = base_model(input_shape)

Additionally, you need to fix the inputs and outputs of the final model, because concatenate cannot join Model objects, only their output tensors.

    concat_layer = concatenate([vec_anchor.output, vec_positive.output, vec_negative.output], axis = -1, name='concat_layer')
    model = Model(inputs = [vec_anchor.input, vec_positive.input, vec_negative.input], outputs = concat_layer, name = 'speech_to_vec')
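
Putting the two changes together, a sketch of how the adjusted speech_model could look (untested, keeping your layer choices; base_model here is still your builder function that takes a shape tuple and returns a Model):

    # Sketch of the adjusted speech_model (assumes base_model is the builder
    # function above, taking a shape tuple and returning a keras Model).
    def speech_model(input_shape, base_model):
        # build one sub-model per branch from the shape tuple
        vec_anchor = base_model(input_shape)
        vec_positive = base_model(input_shape)
        vec_negative = base_model(input_shape)

        # concatenate the sub-models' output tensors, not the Model objects
        concat_layer = concatenate(
            [vec_anchor.output, vec_positive.output, vec_negative.output],
            axis=-1, name='concat_layer')

        # wire the final model from the sub-models' input tensors
        model = Model(
            inputs=[vec_anchor.input, vec_positive.input, vec_negative.input],
            outputs=concat_layer, name='speech_to_vec')
        return model

    speech_model = speech_model(input_shape=(n_freq, Tx), base_model=base_model)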
