I have two datasets and a weight array (train_X, validation_X, train_Y, validation_Y, and sampleW). The X datasets are 3-D numpy arrays, the Y datasets are 2-D numpy arrays, and sampleW is a 1-D numpy array.
How do I successfully migrate from fit_generator() to fit()?
Specifically:

- Is fit(x=None, y=None, ...) where train_X and train_Y go?
- How do I pass the validation data (validation_X, validation_Y) separately?
- Can I pass sampleW the same way as before?
- How do I train on the data in batches with fit()?
- Most importantly: how do I do all of this without using a generator?
Here is a minimal reproducible example. (I am currently trying to figure out why every batch size other than 1 raises an error; batch sizes > 1 should work as well.)
```python
# -*- coding: utf-8 -*-
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
import tensorflow as tf
import numpy as np

tensorboard_path = r"C:\Users\user\documents\session"  # <--- your path
checkpoint_path = tensorboard_path

BATCH_SIZE = 1
EPOCHS, Input_shape, labels = 3, (20, 4), 6
train_X, train_Y = np.asarray([np.random.random(Input_shape) for x in range(100)]), np.random.random((100, labels))
validation_X, validation_Y = np.asarray([np.random.random(Input_shape) for x in range(50)]), np.random.random((50, labels))
sampleW = np.random.random((100, 1))

class CustomGenerator_SampleW(tf.keras.utils.Sequence):
    """Sequence that yields (x, y, sample_weight) batches."""
    def __init__(self, list_x, labels, batch_size, sample_weights=None):
        self.labels = labels
        self.batch_size = batch_size
        self.list_x = list_x
        self.sample_weights = sample_weights

    def __len__(self):
        return (np.ceil(len(self.list_x) / float(self.batch_size))).astype(np.int)

    def __getitem__(self, idx):
        batch_x = self.list_x[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_weight = self.sample_weights[idx * self.batch_size : (idx + 1) * self.batch_size]
        return np.array(batch_x), np.array(batch_y), np.array(batch_weight)

class CustomGenerator(tf.keras.utils.Sequence):
    """Sequence that yields (x, y) batches without sample weights."""
    def __init__(self, list_x, labels, batch_size):
        self.labels = labels
        self.batch_size = batch_size
        self.list_x = list_x

    def __len__(self):
        return (np.ceil(len(self.list_x) / float(self.batch_size))).astype(np.int)

    def __getitem__(self, idx):
        batch_x = self.list_x[idx * self.batch_size : (idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size : (idx + 1) * self.batch_size]
        return np.array(batch_x), np.array(batch_y)

model = Sequential()
model.add(LSTM(242, input_shape=Input_shape, return_sequences=True))
model.add(Dropout(0.3))
model.add(BatchNormalization())

model.add(LSTM(242, return_sequences=True))
model.add(Dropout(0.3))
model.add(BatchNormalization())

model.add(Dense(labels, activation='tanh'))
model.add(Dropout(0.3))

opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
model.compile(loss='mean_absolute_error', optimizer=opt, metrics=['mse'])

if sampleW is not None:
    train_batch_gen = CustomGenerator_SampleW(train_X, train_Y, BATCH_SIZE, sample_weights=sampleW)
else:
    train_batch_gen = CustomGenerator(train_X, train_Y, BATCH_SIZE)
validation_batch_gen = CustomGenerator(validation_X, validation_Y, BATCH_SIZE)

tensorboard = TensorBoard(tensorboard_path)
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_loss', verbose=1,
                             save_best_only=True, mode='min')

model.fit_generator(train_batch_gen, steps_per_epoch=None, epochs=EPOCHS,
                    validation_data=validation_batch_gen,
                    callbacks=[tensorboard, checkpoint])
```
Answer:
The error is caused by a mismatch between the shape of your model's output and the shape of the labels you provide.

Model architecture: (model plot omitted)

As the plot shows, your model's output has shape (batch_size, 20, 6), while your labels have shape (batch_size, 6); the two are incompatible.
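A quick way to confirm the mismatch is to print the two shapes:

```python
print(model.output_shape)  # (None, 20, 6) -- one 6-vector per timestep
print(train_Y.shape)       # (100, 6)      -- one 6-vector per sample
```

The extra timestep dimension survives because both LSTM layers are built with return_sequences=True, so the final Dense layer is applied to every one of the 20 timesteps.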
Why does a batch size of 1 work?

It works because TensorFlow falls back on a technique called broadcasting. For example:
```python
x = np.ones(shape=(1, 20, 6))   # same shape as one batch of model output
y = np.ones(shape=(1, 6))       # same shape as one batch of labels

y - x   # no error: y is broadcast across the 20 timesteps
# array([[[0., 0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0., 0.],
#         ...
#         [0., 0., 0., 0., 0., 0.]]])   # shape (1, 20, 6), all zeros
```
For more information, see the NumPy documentation on broadcasting.
But when you use batch_size = 10, broadcasting is no longer possible.
Code:
```python
x = np.ones(shape=(10, 20, 6))
y = np.ones(shape=(10, 6))
y - x
```
Output:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-102-4a65323a80fa> in <module>
      1 x = np.ones(shape = (10,20,6))
      2 y = np.ones(shape = (10,6))
----> 3 y-x

ValueError: operands could not be broadcast together with shapes (10,6) (10,20,6)
```
You can fix the model's output shape by adding a Flatten layer after the LSTM stack, which collapses the 2-D (timesteps, features) output into a 1-D vector before the final Dense layer.
Code:
```python
from tensorflow.keras.layers import Flatten  # needed in addition to the imports above

model = Sequential()
model.add(LSTM(242, input_shape=Input_shape, return_sequences=True))
model.add(Dropout(0.3))
model.add(BatchNormalization())

model.add(LSTM(242, return_sequences=True))
model.add(Dropout(0.3))
model.add(BatchNormalization())

model.add(Flatten())
model.add(Dropout(0.3))
model.add(Dense(labels, activation='tanh'))

opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
model.compile(loss='mean_absolute_error', optimizer=opt, metrics=['mse'])
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)
```
Model architecture: (plot_model output omitted)
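As a design note, an alternative to Flatten (not what the answer above uses, but it resolves the same shape problem) is to let the second LSTM return only its final hidden state by dropping return_sequences=True, so the Dense layer already produces a (batch_size, labels) output:

```python
# Variant of the second LSTM block of the fixed model above:
model.add(LSTM(242))                         # return_sequences=False by default -> (batch_size, 242)
model.add(Dropout(0.3))
model.add(BatchNormalization())
model.add(Dense(labels, activation='tanh'))  # -> (batch_size, labels)
```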
Finally, use model.fit():

```python
model.fit(train_X, train_Y, epochs=EPOCHS,
          validation_data=(validation_X, validation_Y),
          sample_weight=sampleW)
```
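For completeness, here is a sketch of the fully migrated call, assuming you keep the BATCH_SIZE, tensorboard, and checkpoint objects from the example above. Note that for per-sample weighting Keras generally expects sample_weight as a 1-D array (2-D arrays are reserved for timestep-wise weighting), while the example builds sampleW with shape (100, 1), so flattening it first is the safer choice:

```python
model.fit(train_X, train_Y,
          batch_size=BATCH_SIZE,
          epochs=EPOCHS,
          validation_data=(validation_X, validation_Y),
          sample_weight=sampleW.ravel(),  # (100, 1) -> (100,)
          callbacks=[tensorboard, checkpoint])
```

fit() batches the arrays itself, so the two Sequence subclasses are no longer needed at all.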