### InvalidArgumentError: Input to reshape is a tensor with 27000 values, but the requested shape requires 810000 [Op:Reshape]

While setting up a 3D-GAN for ModelNet10, I ran into the following error message:

```
InvalidArgumentError: Input to reshape is a tensor with 27000 values, but the requested shape requires 810000 [Op:Reshape]
```

It looks to me as though the batches are not being created correctly, so the tensor ends up with an invalid shape. I have tried different approaches but cannot get the batching right. Any suggestions on how to clean up my code are much appreciated. Thanks in advance!

```python
import time
import numpy as np
import tensorflow as tf
np.random.seed(1)
from tensorflow.keras import layers
from IPython import display

# Load the data
modelnet_path = '/modelnet10.npz'
data = np.load(modelnet_path)
X, Y = data['X_train'], data['y_train']
X_test, Y_test = data['X_test'], data['y_test']
X = X.reshape(X.shape[0], 30, 30, 30, 1).astype('float32')

# Hyperparameters
BUFFER_SIZE = 3991
BATCH_SIZE = 30
LEARNING_RATE = 4e-4
BETA_1 = 5e-1
EPOCHS = 100

# Random seed for image generation
n_examples = 16
noise_dim = 100
seed = tf.random.normal([n_examples, noise_dim])

train_dataset = tf.data.Dataset.from_tensor_slices(X).batch(BATCH_SIZE)

# Build the networks
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Reshape((30, 30, 30, 1), input_shape=(30, 30, 30)))
    model.add(layers.Conv3D(16, 6, strides=2, activation='relu'))
    model.add(layers.Conv3D(64, 5, strides=2, activation='relu'))
    model.add(layers.Conv3D(64, 5, strides=2, activation='relu'))
    model.add(layers.Flatten())
    model.add(layers.Dense(10))
    return model

discriminator = make_discriminator_model()

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(15*15*15*128, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    model.add(layers.Reshape((15, 15, 15, 128)))
    model.add(layers.Conv3DTranspose(64, (5, 5, 5), strides=(1, 1, 1), padding='valid', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    model.add(layers.Conv3DTranspose(32, (5, 5, 5), strides=(2, 2, 2), padding='valid', use_bias=False, activation='tanh'))
    return model

generator = make_generator_model()

# Optimizer and loss functions
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

optimizer = tf.keras.optimizers.Adam(lr=LEARNING_RATE, beta_1=BETA_1)

# Training
def train_step(shapes):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_shapes = generator(noise, training=True)
        real_output = discriminator(shapes, training=True)
        fake_output = discriminator(generated_shapes, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gen_gradients = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_gradients = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    optimizer.apply_gradients(zip(gen_gradients, generator.trainable_variables))
    optimizer.apply_gradients(zip(disc_gradients, discriminator.trainable_variables))

def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()
        for shape_batch in dataset:
            train_step(shape_batch)
        display.clear_output(wait=True)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
    display.clear_output(wait=True)

train(X_test, EPOCHS)
```

Answer:

X_test is just a plain array, so each iteration of your training loop feeds only a single sample (30 * 30 * 30 = 27000 values) into the model, while the model itself expects a full batch of 30 (the batch size) * 30 * 30 * 30 = 810000 values.
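To see where the two numbers in the error come from, here is a minimal numpy sketch (the array is made-up stand-in data; the real `X_test` comes from `modelnet10.npz`):

```python
import numpy as np

# Hypothetical stand-in for X_test: 90 voxel grids of shape (30, 30, 30).
X_test = np.zeros((90, 30, 30, 30), dtype='float32')

# Iterating over a plain array yields one sample at a time ...
sample = next(iter(X_test))
print(sample.shape, sample.size)   # (30, 30, 30) 27000

# ... but the model's Reshape layer expects a whole batch of 30 samples.
BATCH_SIZE = 30
print(BATCH_SIZE * sample.size)    # 810000
```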

```python
modelnet_path = '/modelnet10.npz'
data = np.load(modelnet_path)
X, Y = data['X_train'], data['y_train']
X_test, Y_test = data['X_test'], data['y_test']
X = X.reshape(X.shape[0], 30, 30, 30, 1).astype('float32')
...
train_dataset = tf.data.Dataset.from_tensor_slices(X).batch(BATCH_SIZE)
...
def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()
        for shape_batch in dataset:
            train_step(shape_batch)
        display.clear_output(wait=True)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
    display.clear_output(wait=True)

train(X_test, EPOCHS)
```

Consider training on the train_dataset you already created, or turn X_test into a tf.data.Dataset as well.

```python
train(train_dataset, EPOCHS)
```
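If you also want to run the loop over the test split, the same batching pipeline used for X applies. A sketch (with random stand-in data so it runs without `modelnet10.npz`):

```python
import numpy as np
import tensorflow as tf

BATCH_SIZE = 30

# Hypothetical stand-in for the real X_test loaded from modelnet10.npz.
X_test = np.random.rand(90, 30, 30, 30).astype('float32')

# Add the channel axis and batch, exactly as was done for train_dataset.
X_test = X_test.reshape(X_test.shape[0], 30, 30, 30, 1)
test_dataset = tf.data.Dataset.from_tensor_slices(X_test).batch(BATCH_SIZE)

for batch in test_dataset.take(1):
    print(batch.shape)  # (30, 30, 30, 30, 1)
```

Each element yielded by `test_dataset` is now a full batch of 30 samples, which matches what the discriminator expects.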
