Why does flatten() fail in Colab when it works in a Kaggle notebook posted by another user?

I'm working on a pneumonia-detection project and have been looking at related notebooks on Kaggle. One user stacked two pretrained models, DenseNet169 and MobileNet. I copied that user's entire Kaggle notebook; it runs without errors for them, but when I run it on Google Colab I get an error in this part:

The part that fails:

    from keras.layers.merge import concatenate
    from keras.layers import Input
    import tensorflow as tf

    input_shape = (224,224,3)
    input_layer = Input(shape = (224, 224, 3))

    #first model
    base_mobilenet = MobileNetV2(weights = 'imagenet', include_top = False, input_shape = input_shape)
    base_densenet = DenseNet169(weights = 'imagenet', include_top = False, input_shape = input_shape)

    for layer in base_mobilenet.layers:
        layer.trainable = False
    for layer in base_densenet.layers:
        layer.trainable = False

    model_mobilenet = base_mobilenet(input_layer)
    model_mobilenet = GlobalAveragePooling2D()(model_mobilenet)
    output_mobilenet = Flatten()(model_mobilenet)

    model_densenet = base_densenet(input_layer)
    model_densenet = GlobalAveragePooling2D()(model_densenet)
    output_densenet = Flatten()(model_densenet)

    merged = tf.keras.layers.Concatenate()([output_mobilenet, output_densenet])

    x = BatchNormalization()(merged)
    x = Dense(256, activation = 'relu')(x)
    x = Dropout(0.5)(x)
    x = BatchNormalization()(x)
    x = Dense(128, activation = 'relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(1, activation = 'sigmoid')(x)
    stacked_model = tf.keras.models.Model(inputs = input_layer, outputs = x)

Error traceback:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-35-69c389bc7252> in <module>()
         18 model_mobilenet = base_mobilenet(input_layer)
         19 model_mobilenet = GlobalAveragePooling2D()(model_mobilenet)
    ---> 20 output_mobilenet = Flatten(data_format=None)(model_mobilenet)
         21
         22 model_densenet = base_densenet(input_layer)

    5 frames
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
       1028         with autocast_variable.enable_auto_cast_variables(
       1029             self._compute_dtype_object):
    -> 1030           outputs = call_fn(inputs, *args, **kwargs)
       1031
       1032         if self._activity_regularizer:

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/core.py in call(self, inputs)
        672       # Full static shape is guaranteed to be available.
        673       # Performance: Using `constant_op` is much faster than passing a list.
    --> 674       flattened_shape = constant_op.constant([inputs.shape[0], -1])
        675       return array_ops.reshape(inputs, flattened_shape)
        676     else:

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
        263   """
        264   return _constant_impl(value, dtype, shape, name, verify_shape=False,
    --> 265                         allow_broadcast=True)
        266
        267

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
        274       with trace.Trace("tf.constant"):
        275         return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    --> 276     return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
        277
        278   g = ops.get_default_graph()

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
        299 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
        300   """Implementation of eager constant."""
    --> 301   t = convert_to_eager_tensor(value, ctx, dtype)
        302   if shape is None:
        303     return t

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
         96       dtype = dtypes.as_dtype(dtype).as_datatype_enum
         97   ctx.ensure_initialized()
    ---> 98   return ops.EagerTensor(value, ctx.device_name, dtype)
         99
        100

    ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.

Answer:

Your imports are a bit mixed up: the notebook takes some layers from the standalone `keras` package (`keras.layers.merge`, `keras.layers`) and others from `tf.keras`. Mixing the two can work on one version combination (apparently the one Kaggle ships) and break on the TensorFlow build in Colab, which is exactly where the `Flatten` call fails. Import everything from `tensorflow.keras` instead.
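Since the same notebook behaves differently on Kaggle and Colab, it is worth printing the library versions in each environment first (my own addition, not part of the original answer):

    import tensorflow as tf
    import keras  # the standalone package the original imports come from

    # Differences between these two versions across Kaggle and Colab are a
    # common reason the same mixed-import notebook runs in one place and
    # fails in the other.
    print("tensorflow:", tf.__version__)
    print("standalone keras:", keras.__version__)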

Here is the fixed version of the code:

    from tensorflow.keras.layers import concatenate
    from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Flatten, BatchNormalization, Dense, Dropout
    from tensorflow.keras.applications import MobileNetV2, DenseNet169
    import tensorflow as tf

    input_shape = (224,224,3)
    input_layer = Input(shape = (224, 224, 3))

    #first model
    base_mobilenet = MobileNetV2(weights = 'imagenet', include_top = False, input_shape = input_shape)
    base_densenet = DenseNet169(weights = 'imagenet', include_top = False, input_shape = input_shape)

    for layer in base_mobilenet.layers:
        layer.trainable = False
    for layer in base_densenet.layers:
        layer.trainable = False

    model_mobilenet = base_mobilenet(input_layer)
    model_mobilenet = GlobalAveragePooling2D()(model_mobilenet)
    output_mobilenet = Flatten()(model_mobilenet)

    model_densenet = base_densenet(input_layer)
    model_densenet = GlobalAveragePooling2D()(model_densenet)
    output_densenet = Flatten()(model_densenet)

    merged = tf.keras.layers.Concatenate()([output_mobilenet, output_densenet])

    x = BatchNormalization()(merged)
    x = Dense(256, activation = 'relu')(x)
    x = Dropout(0.5)(x)
    x = BatchNormalization()(x)
    x = Dense(128, activation = 'relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(1, activation = 'sigmoid')(x)
    stacked_model = tf.keras.models.Model(inputs = input_layer, outputs = x)
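If you want a quick sanity check that the graph now wires up correctly in Colab before you start training, something like the sketch below works. This is my own addition, not part of the original notebook; the dummy batch and the Adam/binary-crossentropy settings are just placeholders.

    import numpy as np

    # Compile with placeholder settings and print the architecture of the
    # stacked (MobileNetV2 + DenseNet169) model defined above.
    stacked_model.compile(optimizer = 'adam',
                          loss = 'binary_crossentropy',
                          metrics = ['accuracy'])
    stacked_model.summary()

    # Run a random batch through the model: the final sigmoid Dense layer
    # should yield one probability per image, i.e. shape (2, 1) here.
    dummy_batch = np.random.rand(2, 224, 224, 3).astype('float32')
    print(stacked_model.predict(dummy_batch).shape)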

