Thanks for your time!
I am trying to build a neural network for regression on discrete values, but with one special twist. The input is processed in two ways (model A and model B), and the two results are then combined with a weighting. The outputs are combined via the formula A*G + B*(1-G), where G = 1/(1+exp(-gamma * (input_weighting - c))). gamma and c are supposed to be learned during training. I am struggling with the trainable variables gamma and c and with the subtraction (1-G).
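To make the intended computation concrete, here is the blend as a plain NumPy sketch (made-up numbers, purely illustrative):

import numpy as np

def gate(w, gamma, c):
    return 1.0 / (1.0 + np.exp(-gamma * (w - c)))  # G in (0, 1)

A, B = 2.0, 4.0                    # stand-ins for the outputs of model A and model B
G = gate(w=0.8, gamma=5.0, c=0.5)  # ~0.82 for these values
output = A * G + B * (1 - G)       # smoothly interpolates between A and B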
My current code fails in two different places:

import keras
from keras.layers import Dense, Input, Layer, subtract, multiply
from keras import backend as K
from keras.optimizers import SGD

# two models for the time series (convolutional approach)
input_model_A = keras.Input(shape=(12,))
model_A = Dense(12)(input_model_A)
input_model_B = keras.Input(shape=(12,))
model_B = Dense(24)(input_model_B)
# input for the model weighting
input_weighting = keras.Input(shape=[1,], name="vola_input")
# exponent = gamma * (input_weighting - c)
class MyLayer(Layer):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape=[[1,1],[1,1]]):
        self._c = K.variable(0.5)
        self._gamma = K.variable(0.5)
        self.trainable_weights = [self._c, self._gamma]
        super(MyLayer, self).build(input_shape)  # be sure to call this at the end

    def call(self, vola, **kwargs):
        intermediate = subtract([vola, self._c])
        result = multiply([self._gamma, intermediate])
        return result

    def compute_output_shape(self, input_shape):
        return input_shape[0]
exponent = MyLayer()(input_weighting)
# G = 1/(1+exp(-exponent))
G = keras.layers.Dense(1, activation="sigmoid", name="G")(exponent)
# output = G*A + (1-G)*B
weighted_A = keras.layers.Multiply(name="layer_A")([model_A, G])
pseudoinput = Input(shape=[1, 1], name="pseudoinput_input", tensor=K.variable([1]))
weighted_B = keras.layers.Multiply(name="layer_B")([model_B, keras.layers.Subtract()([pseudoinput, G])])
merge_layer = keras.layers.Add(name="merge_layer")([weighted_A, weighted_B])
output_layer = keras.layers.Dense(units=1, activation='relu', name="output_layer")(merge_layer)
model = keras.Model(inputs=[input_model_A, input_model_B, input_weighting], outputs=[output_layer])
optimizer = SGD(learning_rate=0.01, momentum=0.0, nesterov=False)
model.compile(optimizer=optimizer, loss='mean_squared_error')
- My custom layer fails with an error I cannot make sense of; it seems to be related to how the input dimensions are defined:
File "...\keras\layers\merge.py", line 74, in build batch_sizes = [s[0] for s in input_shape if s is not None] File "...\keras\layers\merge.py", line 74, in <listcomp> batch_sizes = [s[0] for s in input_shape if s is not None]IndexError: tuple index out of range
- My "1" (in 1-G) simply does not work. I suspect something is wrong with the way I try to instantiate a constant tensor/layer:
File "...\keras\backend\tensorflow_backend.py", line 75, in symbolic_fn_wrapper return func(*args, **kwargs) File "...\keras\engine\base_layer.py", line 446, in __call__ self.assert_input_compatibility(inputs) File "...\keras\engine\base_layer.py", line 358, in assert_input_compatibility str(K.ndim(x)))ValueError: Input 0 is incompatible with layer c: expected min_ndim=2, found ndim=1
I found these two suggestions and tried them, without success: Creating a constant value in Keras, and How to give a constant input to Keras.
Frankly, I am interested in the causes of both of my problems, but I would prefer a solution that realizes the described architecture.
Answer:
Here is my proposal, with a few comments:
import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense, Layer, Lambda, Add
from keras.models import Model
from keras import backend as K

input_model_A = Input(shape=(12,))
model_A = Dense(24)(input_model_A)
input_model_B = Input(shape=(12,))
model_B = Dense(24)(input_model_B)
# model A and model B must have the same last dimension,
# otherwise the Add operation below cannot be applied

# input for the model weighting
input_weighting = Input(shape=(1,), name="vola_input")

class MyLayer(Layer):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)
        self._c = K.variable(0.5)
        self._gamma = K.variable(0.5)

    def call(self, vola, **kwargs):
        x = self._gamma * (vola - self._c)  # gamma * (input_weighting - c)
        result = tf.nn.sigmoid(x)           # 1 / (1 + exp(-x))
        return result

G = MyLayer()(input_weighting)  # 1/(1+exp(-gamma * (input_weighting - c)))
weighted_A = Lambda(lambda x: x[0] * x[1])([model_A, G])        # A*G
weighted_B = Lambda(lambda x: x[0] * (1 - x[1]))([model_B, G])  # B*(1-G)
merge_layer = Add(name="merge_layer")([weighted_A, weighted_B])  # A*G + B*(1-G)
output_layer = Dense(units=1, activation='relu', name="output_layer")(merge_layer)

model = Model(inputs=[input_model_A, input_model_B, input_weighting], outputs=[output_layer])
model.compile(optimizer='adam', loss='mean_squared_error')

# create dummy data and fit
n_sample = 100
Xa = np.random.uniform(0, 1, (n_sample, 12))
Xb = np.random.uniform(0, 1, (n_sample, 12))
W = np.random.uniform(0, 1, n_sample)
y = np.random.uniform(0, 1, n_sample)
model.fit([Xa, Xb, W], y, epochs=3)
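One caveat: depending on the Keras version, variables created with K.variable in __init__ are not necessarily registered in the layer's trainable_weights. If you want gamma and c tracked (and saved) like ordinary weights, a variant built on add_weight should work as well; a minimal sketch, with GateLayer as a hypothetical name:

from keras.initializers import Constant

class GateLayer(Layer):  # hypothetical name; computes the same gate as MyLayer above
    def build(self, input_shape):
        # register c and gamma as proper trainable weights of the layer
        self._c = self.add_weight(name="c", shape=(1,),
                                  initializer=Constant(0.5), trainable=True)
        self._gamma = self.add_weight(name="gamma", shape=(1,),
                                      initializer=Constant(0.5), trainable=True)
        super(GateLayer, self).build(input_shape)

    def call(self, vola, **kwargs):
        return tf.nn.sigmoid(self._gamma * (vola - self._c))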
Here is a running notebook: https://colab.research.google.com/drive/1MA6qs4IK9e41TbBK1mAebtALA2fMcNPY?usp=sharing
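To verify that the two parameters actually move during training, you can read them back after fit (a small check on top of the code above):

gate_layer = [l for l in model.layers if isinstance(l, MyLayer)][0]
print(K.get_value(gate_layer._gamma), K.get_value(gate_layer._c))  # should have drifted away from 0.5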