Deep variational autoencoder in Keras

I am trying to adapt the Keras VAE example into a deeper network by adding one extra hidden layer.

Original code: the original VAE example

The changes:

```python
batch_size = 200
original_dim = 784
latent_dim = 2
intermediate_dim_deep = 384  # <<<<<<<
intermediate_dim = 256
nb_epoch = 20
#
x = Input(batch_shape=(batch_size, original_dim))
x = Dense(intermediate_dim_deep, activation='relu')(x)  # new layer <<<<<<
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
#
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    return z_mean + K.exp(z_log_var / 2) * epsilon

# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
#
# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_d = Dense(intermediate_dim_deep, activation='rely')  # new layer <<<<<<
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
d_decoded = decoder_d(h_decoded)  # extra step here <<<<<<<
x_decoded_mean = decoder_mean(d_decoded)
#
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

#
vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)
#####
```

When compiling, I get this error:

```
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py:1615: UserWarning: Model inputs must come from a Keras Input layer, they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "model_1" was not an Input tensor, it was generated by layer dense_1.
Note that input tensors are instantiated via `tensor = Input(shape)`.
The tensor that caused the issue was: None
  str(x.name))
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-8-c9010948cdee> in <module>()
----> 1 vae = Model(x, x_decoded_mean)
      2 vae.compile(optimizer='rmsprop', loss=vae_loss)
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in __init__(self, input, output, name)
   1788                                 'The following previous layers '
   1789                                 'were accessed without issue: ' +
-> 1790                                 str(layers_with_complete_input))
   1791                     for x in node.output_tensors:
   1792                         computable_tensors.append(x)
Exception: Graph disconnected: cannot obtain value for tensor input_1 at layer "input_1". The following previous layers were accessed without issue: []
```

I have seen other examples in the repository, and this seems to be a valid approach. Am I missing something?


Answer:

When you added the new hidden layer, you overwrote the `x` variable, so the model no longer has an Input layer. Also, is `'rely'` a valid activation option?
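A minimal sketch of the corrected wiring, written against the tf.keras API for illustration (the question uses the older standalone Keras 1.x API, which differs slightly); the sampling `Lambda` and loss are omitted for brevity. The two fixes are: keep the `Input` tensor in its own variable instead of reassigning `x`, and spell the activation `'relu'`:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

original_dim = 784
intermediate_dim_deep = 384
intermediate_dim = 256
latent_dim = 2

# Keep the Input tensor in `x`; give the new hidden layer its own name.
x = layers.Input(shape=(original_dim,))
h_deep = layers.Dense(intermediate_dim_deep, activation='relu')(x)  # new layer
h = layers.Dense(intermediate_dim, activation='relu')(h_deep)
z_mean = layers.Dense(latent_dim)(h)

# Decoder side: 'rely' in the question is a typo for 'relu'.
decoder_d = layers.Dense(intermediate_dim_deep, activation='relu')
decoder_mean = layers.Dense(original_dim, activation='sigmoid')
x_decoded_mean = decoder_mean(decoder_d(z_mean))

# `x` is still the Input tensor, so Model() no longer raises "Graph disconnected".
model = Model(x, x_decoded_mean)
```

Because `x` still refers to the `Input` layer when `Model(x, ...)` is built, the graph traces back to a real input and both errors from the question disappear.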

