How to add a tf.constant to the output in Keras

I have a working model that is built with the following code:

model = tf.keras.Model(inputs=input_layers, outputs=outputs)

If I try to add a simple constant to the outputs, I get an error. For example:

outputs = outputs + [tf.constant(['label1', 'label2'], dtype=tf.string)]
model = tf.keras.Model(inputs=input_layers, outputs=outputs)

The error message: AttributeError: Tensor.op is meaningless when eager execution is enabled.

Is there a way to add the constant to the model after training, or when saving the model?

The idea is to have the constant be part of the output at serving time.

A full network example that reproduces the error:

import tensorflow as tf
import tensorflow.keras as keras

input = keras.layers.Input(shape=(2,))
hidden = keras.layers.Dense(10)(input)
output = keras.layers.Dense(3, activation='sigmoid')(hidden)
model = keras.models.Model(inputs=input, outputs=[output, tf.constant(['out1','out2','out3'], dtype=tf.string)])

The error:

in <module>
      5 hidden = keras.layers.Dense(10)(input)
      6 output = keras.layers.Dense(3, activation='sigmoid')(input)
----> 7 model = keras.models.Model(inputs=input, outputs=[output, tf.constant(['out1','out2','out3'], dtype=tf.string)])

/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in __init__(self, *args, **kwargs)
    144
    145   def __init__(self, *args, **kwargs):
--> 146     super(Model, self).__init__(*args, **kwargs)
    147     _keras_api_gauge.get_cell('model').set(True)
    148     # initializing _distribution_strategy here since it is possible to call

/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in __init__(self, *args, **kwargs)
    165         'inputs' in kwargs and 'outputs' in kwargs):
    166       # Graph network
--> 167       self._init_graph_network(*args, **kwargs)
    168     else:
    169       # Subclassed network

/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
    268
    269     if any(not hasattr(tensor, '_keras_history') for tensor in self.outputs):
--> 270       base_layer_utils.create_keras_history(self._nested_outputs)
    271
    272     self._base_init(name=name, **kwargs)

/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
    182     keras_tensors: The Tensors found that came from a Keras Layer.
    183   """
--> 184   _, created_layers = _create_keras_history_helper(tensors, set(), [])
    185   return created_layers
    186

/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    208     if getattr(tensor, '_keras_history', None) is not None:
    209       continue
--> 210     op = tensor.op  # The Op that created this Tensor.
    211     if op not in processed_ops:
    212       # Recursively set `_keras_history`.

/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in op(self)
   1078   def op(self):
   1079     raise AttributeError(
-> 1080         "Tensor.op is meaningless when eager execution is enabled.")
   1081
   1082   @property

AttributeError: Tensor.op is meaningless when eager execution is enabled.

Using Python 3.6 and TensorFlow 2.0.


Answer:

Put the constant inside a Lambda layer. Keras does some extra bookkeeping on its outputs, so a bare tf operation is not enough; wrapping it in a Lambda layer solves the problem.

Edit, to give an example of how to do this: your last example translates into the following code.

import tensorflow as tf
import tensorflow.keras as keras

inputs = keras.layers.Input(shape=(2,))
hidden = keras.layers.Dense(10)(inputs)
output1 = keras.layers.Dense(3, activation='sigmoid')(hidden)

@tf.function
def const(tensor):
    batch_size = tf.shape(tensor)[0]
    constant = tf.constant(['out1', 'out2', 'out3'], dtype=tf.string)
    constant = tf.expand_dims(constant, axis=0)
    return tf.broadcast_to(constant, shape=(batch_size, 3))

output2 = keras.layers.Lambda(const)(inputs)
model = keras.models.Model(inputs=inputs, outputs=[output1, output2])
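As a quick sanity check (a small sketch of my own, not part of the original answer), calling the model on a dummy batch should show the second output broadcasting the string constant over the batch:

import numpy as np

# Dummy batch of 4 samples with 2 features, matching Input(shape=(2,)).
x = np.random.rand(4, 2).astype('float32')

preds, labels = model(x)
print(preds.shape)   # (4, 3) -- the sigmoid predictions
print(labels.shape)  # (4, 3) -- the string constant broadcast over the batch
print(labels[0])     # [b'out1' b'out2' b'out3']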

Edit: this reminds me of an earlier project where I used a lot of constants in a Keras model. Back then I wrote a layer for it:

import copy
from tensorflow.keras import backend as K  # in addition to the tf/keras imports above

class ConstantOnBatch(keras.layers.Layer):
    """Broadcasts a constant over the batch dimension of the input."""

    def __init__(self, constant, *args, **kwargs):
        # Keep the raw value around so the layer can be serialized.
        self._initial_constant = copy.deepcopy(constant)
        self.constant = K.constant(constant)
        self.out_shape = self.constant.shape.as_list()
        self.constant = tf.reshape(self.constant, [1] + self.out_shape)
        super().__init__(*args, **kwargs)

    def build(self, input_shape):
        super().build(input_shape)

    def call(self, inputs):
        # Broadcast the stored constant to (batch_size, *out_shape).
        batch_size = tf.shape(inputs)[0]
        output_shape = [batch_size] + self.out_shape
        return tf.broadcast_to(self.constant, output_shape)

    def compute_output_shape(self, input_shape):
        input_shape = input_shape.as_list()
        return [input_shape[0]] + self.out_shape

    def get_config(self):
        base_config = super().get_config()
        base_config['constant'] = self._initial_constant
        return base_config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

It probably needs updating for TF2 and could be written more cleanly, but if you need a lot of constants it might serve as the basis for a more elegant solution than a pile of Lambda layers.
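Assuming the layer is brought up to date as noted, usage would look roughly like the sketch below (my illustration, not from the original project; it uses a numeric constant because K.constant defaults to float32, so a string constant would need an explicit dtype):

# Sketch only: wiring ConstantOnBatch into the functional model above.
inputs = keras.layers.Input(shape=(2,))
hidden = keras.layers.Dense(10)(inputs)
output1 = keras.layers.Dense(3, activation='sigmoid')(hidden)

# A single layer call replaces the @tf.function + Lambda combination,
# broadcasting the constant over the batch dimension.
output2 = ConstantOnBatch([1.0, 2.0, 3.0])(inputs)

model = keras.models.Model(inputs=inputs, outputs=[output1, output2])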
