Converting a Keras model to multi-label output

I have a model that takes a dataframe that looks like this:

image,level
10_left,0
10_right,0
13_left,0

The model is built as follows:

import os
import pandas as pd

base_image_dir = 'extra_data/dr/'
retina_df = pd.read_csv(os.path.join(base_image_dir, 'trainLabels.csv'))
retina_df['PatientId'] = retina_df['image'].map(lambda x: x.split('_')[0])
retina_df['path'] = retina_df['image'].map(lambda x: os.path.join(base_image_dir, 'train',
                                                                  '{}.jpeg'.format(x)))
retina_df['exists'] = retina_df['path'].map(os.path.exists)
print(retina_df['exists'].sum(), 'images found of', retina_df.shape[0], 'total')
retina_df['eye'] = retina_df['image'].map(lambda x: 1 if x.split('_')[-1] == 'left' else 0)

from keras.utils.np_utils import to_categorical
retina_df['level_cat'] = retina_df['level'].map(lambda x: to_categorical(x, 1 + retina_df['level'].max()))

retina_df.dropna(inplace=True)
retina_df = retina_df[retina_df['exists']]
retina_df.sample(3)

from sklearn.model_selection import train_test_split
rr_df = retina_df[['PatientId', 'level']].drop_duplicates()
train_ids, valid_ids = train_test_split(rr_df['PatientId'],
                                        test_size=0.25,
                                        random_state=2018,
                                        stratify=rr_df['level'])
raw_train_df = retina_df[retina_df['PatientId'].isin(train_ids)]
valid_df = retina_df[retina_df['PatientId'].isin(valid_ids)]
print('train', raw_train_df.shape[0], 'validation', valid_df.shape[0])

# balance the classes by oversampling each (level, eye) group
train_df = raw_train_df.groupby(['level', 'eye']).apply(lambda x: x.sample(75, replace=True)
                                                        ).reset_index(drop=True)
print('New Data Size:', train_df.shape[0], 'Old Size:', raw_train_df.shape[0])

import tensorflow as tf
from keras import backend as K
from keras.applications.inception_v3 import preprocess_input
import numpy as np

IMG_SIZE = (512, 512)  # slightly smaller than vgg16 normally expects

def tf_image_loader(out_size,
                    horizontal_flip=True,
                    vertical_flip=False,
                    random_brightness=True,
                    random_contrast=True,
                    random_saturation=True,
                    random_hue=True,
                    color_mode='rgb',
                    preproc_func=preprocess_input,
                    on_batch=False):
    def _func(X):
        with tf.name_scope('image_augmentation'):
            with tf.name_scope('input'):
                X = tf.image.decode_png(tf.read_file(X), channels=3 if color_mode == 'rgb' else 0)
                X = tf.image.resize_images(X, out_size)
            with tf.name_scope('augmentation'):
                if horizontal_flip:
                    X = tf.image.random_flip_left_right(X)
                if vertical_flip:
                    X = tf.image.random_flip_up_down(X)
                if random_brightness:
                    X = tf.image.random_brightness(X, max_delta=0.1)
                if random_saturation:
                    X = tf.image.random_saturation(X, lower=0.75, upper=1.5)
                if random_hue:
                    X = tf.image.random_hue(X, max_delta=0.15)
                if random_contrast:
                    X = tf.image.random_contrast(X, lower=0.75, upper=1.5)
                return preproc_func(X)

    if on_batch:
        # we are meant to use it on a batch
        def _batch_func(X, y):
            return tf.map_fn(_func, X), y
        return _batch_func
    else:
        # we apply it to everything
        def _all_func(X, y):
            return _func(X), y
        return _all_func

def tf_augmentor(out_size,
                 intermediate_size=(640, 640),
                 intermediate_trans='crop',
                 batch_size=16,
                 horizontal_flip=True,
                 vertical_flip=False,
                 random_brightness=True,
                 random_contrast=True,
                 random_saturation=True,
                 random_hue=True,
                 color_mode='rgb',
                 preproc_func=preprocess_input,
                 min_crop_percent=0.001,
                 max_crop_percent=0.005,
                 crop_probability=0.5,
                 rotation_range=10):
    load_ops = tf_image_loader(out_size=intermediate_size,
                               horizontal_flip=horizontal_flip,
                               vertical_flip=vertical_flip,
                               random_brightness=random_brightness,
                               random_contrast=random_contrast,
                               random_saturation=random_saturation,
                               random_hue=random_hue,
                               color_mode=color_mode,
                               preproc_func=preproc_func,
                               on_batch=False)

    def batch_ops(X, y):
        batch_size = tf.shape(X)[0]
        with tf.name_scope('transformation'):
            # code borrowed from https://becominghuman.ai/data-augmentation-on-gpu-in-tensorflow-13d14ecf2b19
            # The list of affine transformations that our image will go under.
            # Every element is an Nx8 tensor, where N is the batch size.
            transforms = []
            identity = tf.constant([1, 0, 0, 0, 1, 0, 0, 0], dtype=tf.float32)
            if rotation_range > 0:
                angle_rad = rotation_range / 180 * np.pi
                angles = tf.random_uniform([batch_size], -angle_rad, angle_rad)
                transforms += [tf.contrib.image.angles_to_projective_transforms(angles,
                                                                                intermediate_size[0],
                                                                                intermediate_size[1])]
            if crop_probability > 0:
                crop_pct = tf.random_uniform([batch_size], min_crop_percent, max_crop_percent)
                left = tf.random_uniform([batch_size], 0, intermediate_size[0] * (1.0 - crop_pct))
                top = tf.random_uniform([batch_size], 0, intermediate_size[1] * (1.0 - crop_pct))
                crop_transform = tf.stack([
                    crop_pct,
                    tf.zeros([batch_size]), top,
                    tf.zeros([batch_size]), crop_pct, left,
                    tf.zeros([batch_size]),
                    tf.zeros([batch_size])
                ], 1)
                coin = tf.less(tf.random_uniform([batch_size], 0, 1.0), crop_probability)
                transforms += [tf.where(coin, crop_transform,
                                        tf.tile(tf.expand_dims(identity, 0), [batch_size, 1]))]
            if len(transforms) > 0:
                X = tf.contrib.image.transform(X,
                                               tf.contrib.image.compose_transforms(*transforms),
                                               interpolation='BILINEAR')  # or 'NEAREST'
            if intermediate_trans == 'scale':
                X = tf.image.resize_images(X, out_size)
            elif intermediate_trans == 'crop':
                X = tf.image.resize_image_with_crop_or_pad(X, out_size[0], out_size[1])
            else:
                raise ValueError('Invalid Operation {}'.format(intermediate_trans))
            return X, y

    def _create_pipeline(in_ds):
        batch_ds = in_ds.map(load_ops, num_parallel_calls=4).batch(batch_size)
        return batch_ds.map(batch_ops)

    return _create_pipeline

def flow_from_dataframe(idg,
                        in_df,
                        path_col,
                        y_col,
                        shuffle=True,
                        color_mode='rgb'):
    files_ds = tf.data.Dataset.from_tensor_slices((in_df[path_col].values,
                                                   np.stack(in_df[y_col].values, 0)))
    in_len = in_df[path_col].values.shape[0]
    while True:
        if shuffle:
            files_ds = files_ds.shuffle(in_len)  # shuffle the whole dataset
        next_batch = idg(files_ds).repeat().make_one_shot_iterator().get_next()
        for i in range(max(in_len // 32, 1)):
            # NOTE: if we loop here it is 'thread-safe-ish'; if we loop on the outside it is completely unsafe
            yield K.get_session().run(next_batch)

batch_size = 48
core_idg = tf_augmentor(out_size=IMG_SIZE,
                        color_mode='rgb',
                        vertical_flip=True,
                        crop_probability=0.0,  # crop doesn't work yet
                        batch_size=batch_size)
valid_idg = tf_augmentor(out_size=IMG_SIZE, color_mode='rgb',
                         crop_probability=0.0,
                         horizontal_flip=False,
                         vertical_flip=False,
                         random_brightness=False,
                         random_contrast=False,
                         random_saturation=False,
                         random_hue=False,
                         rotation_range=0,
                         batch_size=batch_size)

train_gen = flow_from_dataframe(core_idg, train_df,
                                path_col='path',
                                y_col='level_cat')
valid_gen = flow_from_dataframe(valid_idg, valid_df,
                                path_col='path',
                                y_col='level_cat')  # we can use much larger batches for evaluation

t_x, t_y = next(valid_gen)
t_x, t_y = next(train_gen)

# the last import wins, so InceptionV3 is the backbone actually used
from keras.applications.vgg16 import VGG16 as PTModel
from keras.applications.inception_resnet_v2 import InceptionResNetV2 as PTModel
from keras.applications.inception_v3 import InceptionV3 as PTModel
from keras.layers import GlobalAveragePooling2D, Dense, Dropout, Flatten, Input, Conv2D, multiply, LocallyConnected2D, Lambda
from keras.layers import BatchNormalization
from keras.models import Model

in_lay = Input(t_x.shape[1:])
base_pretrained_model = PTModel(input_shape=t_x.shape[1:], include_top=False, weights='imagenet')
base_pretrained_model.trainable = False
pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
pt_features = base_pretrained_model(in_lay)
bn_features = BatchNormalization()(pt_features)

# attention mechanism to turn pixels in the GAP on and off
attn_layer = Conv2D(64, kernel_size=(1, 1), padding='same', activation='relu')(Dropout(0.5)(bn_features))
attn_layer = Conv2D(16, kernel_size=(1, 1), padding='same', activation='relu')(attn_layer)
attn_layer = Conv2D(8, kernel_size=(1, 1), padding='same', activation='relu')(attn_layer)
attn_layer = Conv2D(1,
                    kernel_size=(1, 1),
                    padding='valid',
                    activation='sigmoid')(attn_layer)
# fan it out to all of the channels
up_c2_w = np.ones((1, 1, 1, pt_depth))
up_c2 = Conv2D(pt_depth, kernel_size=(1, 1), padding='same',
               activation='linear', use_bias=False, weights=[up_c2_w])
up_c2.trainable = False
attn_layer = up_c2(attn_layer)

mask_features = multiply([attn_layer, bn_features])
gap_features = GlobalAveragePooling2D()(mask_features)
gap_mask = GlobalAveragePooling2D()(attn_layer)
# to account for missing values from the attention model
gap = Lambda(lambda x: x[0] / x[1], name='RescaleGAP')([gap_features, gap_mask])
gap_dr = Dropout(0.25)(gap)
dr_steps = Dropout(0.25)(Dense(128, activation='relu')(gap_dr))
out_layer = Dense(t_y.shape[-1], activation='softmax')(dr_steps)
retina_model = Model(inputs=[in_lay], outputs=[out_layer])

from keras.metrics import top_k_categorical_accuracy

def top_2_accuracy(in_gt, in_pred):
    return top_k_categorical_accuracy(in_gt, in_pred, k=2)

retina_model.compile(optimizer='adam', loss='categorical_crossentropy',
                     metrics=['categorical_accuracy', top_2_accuracy])
retina_model.summary()

from keras.callbacks import ModelCheckpoint, LearningRateScheduler, EarlyStopping, ReduceLROnPlateau

weight_path = "{}_weights.best.hdf5".format('retina')
checkpoint = ModelCheckpoint(weight_path, monitor='val_loss', verbose=1,
                             save_best_only=True, mode='min', save_weights_only=True)
reduceLROnPlat = ReduceLROnPlateau(monitor='val_loss', factor=0.8, patience=3, verbose=1,
                                   mode='auto', epsilon=0.0001, cooldown=5, min_lr=0.0001)
early = EarlyStopping(monitor="val_loss",
                      mode="min",
                      patience=6)  # probably needs to be more patient, but kaggle time is limited
callbacks_list = [checkpoint, early, reduceLROnPlat]

retina_model.fit_generator(train_gen,
                           steps_per_epoch=train_df.shape[0] // batch_size,
                           validation_data=valid_gen,
                           validation_steps=valid_df.shape[0] // batch_size,
                           epochs=25,
                           callbacks=callbacks_list,
                           workers=0,  # tf-generators are not thread-safe
                           use_multiprocessing=False,
                           max_queue_size=0)
retina_model.load_weights(weight_path)
retina_model.save('full_retina_model.h5')

I know this is a big block of code, but what I want to do is accept a dataframe that looks like this:

image,N,D,G,C,A,H,M,O
2857_left,1,0,0,0,0,0,0,0
3151_left,1,0,0,0,0,0,0,0
3113_left,1,0,0,0,0,0,0,0

To achieve this, I made the following changes:

from sklearn.model_selection import train_test_split

rr_df = retina_df
y = rr_df[['N', 'D', 'G', 'C', 'A', 'H', 'M', 'O']]
train_ids, valid_ids = train_test_split(rr_df['PatientId'],
                                        test_size=0.25,
                                        random_state=2018)
raw_train_df = retina_df[retina_df['PatientId'].isin(train_ids)]
valid_df = retina_df[retina_df['PatientId'].isin(valid_ids)]
print('train', raw_train_df.shape[0], 'validation', valid_df.shape[0])
train_df = raw_train_df

from keras import regularizers, optimizers
from keras.layers import BatchNormalization

in_lay = Input(t_x.shape[1:])
base_pretrained_model = PTModel(input_shape=t_x.shape[1:], include_top=False, weights='imagenet')
base_pretrained_model.trainable = False
pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
pt_features = base_pretrained_model(in_lay)
bn_features = BatchNormalization()(pt_features)

# here we use an attention mechanism to turn pixels in the GAP on and off
attn_layer = Conv2D(64, kernel_size=(1, 1), padding='same', activation='relu')(Dropout(0.5)(bn_features))
attn_layer = Conv2D(16, kernel_size=(1, 1), padding='same', activation='relu')(attn_layer)
attn_layer = Conv2D(8, kernel_size=(1, 1), padding='same', activation='relu')(attn_layer)
attn_layer = Conv2D(1,
                    kernel_size=(1, 1),
                    padding='valid',
                    activation='sigmoid')(attn_layer)
# fan it out to all of the channels
up_c2_w = np.ones((1, 1, 1, pt_depth))
up_c2 = Conv2D(pt_depth, kernel_size=(1, 1), padding='same',
               activation='linear', use_bias=False, weights=[up_c2_w])
up_c2.trainable = False
attn_layer = up_c2(attn_layer)

mask_features = multiply([attn_layer, bn_features])
gap_features = GlobalAveragePooling2D()(mask_features)
gap_mask = GlobalAveragePooling2D()(attn_layer)
# to account for missing values from the attention model
gap = Lambda(lambda x: x[0] / x[1], name='RescaleGAP')([gap_features, gap_mask])
gap_dr = Dropout(0.25)(gap)
x = Dropout(0.25)(Dense(128, activation='relu')(gap_dr))
# out_layer = Dense(t_y.shape[-1], activation = 'softmax')(dr_steps)
output1 = Dense(1, activation='sigmoid')(x)
output2 = Dense(1, activation='sigmoid')(x)
output3 = Dense(1, activation='sigmoid')(x)
output4 = Dense(1, activation='sigmoid')(x)
output5 = Dense(1, activation='sigmoid')(x)
output6 = Dense(1, activation='sigmoid')(x)
output7 = Dense(1, activation='sigmoid')(x)
output8 = Dense(1, activation='sigmoid')(x)
retina_model = Model(inputs=[in_lay],
                     outputs=[output1, output2, output3, output4,
                              output5, output6, output7, output8])
# retina_model = Model([in_lay], output1, output2, output3, output4, output5, output6, output7, output8)
# retina_model.build(t_x.shape[1:])  # `input_shape` is the shape of the input data
# print(model.summary())
# retina_model.compile(optimizers.rmsprop(lr=0.00001, decay=1e-6),
#                      loss=["binary_crossentropy"] * 8)  # , metrics=["accuracy"])
# retina_model = Model(inputs=[in_lay], outputs=[out_layer])
# from keras.metrics import top_k_categorical_accuracy
# def top_2_accuracy(in_gt, in_pred):
#     return top_k_categorical_accuracy(in_gt, in_pred, k=2)

# one binary cross-entropy loss per output head
loss = ["binary_crossentropy", "binary_crossentropy", "binary_crossentropy", "binary_crossentropy",
        "binary_crossentropy", "binary_crossentropy", "binary_crossentropy", "binary_crossentropy"]
retina_model.compile(optimizer='adam', loss=loss,
                     metrics=['accuracy'])
retina_model.summary()

But when I run this, I get:

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 8 array(s), but instead got the following list of 1 arrays: [array([[1, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 1],
       [0, 1, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0, 0, 0], ...

Any suggestions on how to change this model so it trains on multi-label targets? Thanks in advance.


Answer:

You are trying to train a model with eight separate outputs, each of length 1, but your target values arrive as a single array of length 8 per sample.
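A quick way to see the mismatch (a minimal sketch with the shapes involved; 48 is just an illustrative batch size):

import numpy as np

y_batch = np.zeros((48, 8))            # what your generator yields: one (batch, 8) array
y_list = np.split(y_batch, 8, axis=1)  # what an 8-output model expects: 8 arrays of shape (batch, 1)
print(len(y_list), y_list[0].shape)    # -> 8 (48, 1)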

The simplest fix is to replace:

output1 = Dense(1, activation='sigmoid')(x)
output2 = Dense(1, activation='sigmoid')(x)
output3 = Dense(1, activation='sigmoid')(x)
output4 = Dense(1, activation='sigmoid')(x)
output5 = Dense(1, activation='sigmoid')(x)
output6 = Dense(1, activation='sigmoid')(x)
output7 = Dense(1, activation='sigmoid')(x)
output8 = Dense(1, activation='sigmoid')(x)

loss = ["binary_crossentropy", "binary_crossentropy", "binary_crossentropy", "binary_crossentropy",
        "binary_crossentropy", "binary_crossentropy", "binary_crossentropy", "binary_crossentropy"]

with:

# keep sigmoid here; do not change it to softmax for a multi-label problem
output = Dense(8, activation='sigmoid')(x)
loss = "binary_crossentropy"
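With a single 8-unit output the rest of the code stays close to the original. A minimal sketch of the model and compile step under that change, reusing `in_lay` and `x` from your code (`binary_accuracy` is just one reasonable metric choice here):

from keras.layers import Dense
from keras.models import Model

output = Dense(8, activation='sigmoid')(x)            # `x` is the final dropout layer from your code
retina_model = Model(inputs=[in_lay], outputs=[output])
retina_model.compile(optimizer='adam',
                     loss='binary_crossentropy',      # applied element-wise across the 8 labels
                     metrics=['binary_accuracy'])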

Otherwise, you will have to write a custom generator that yields a list of eight target arrays to feed your network, as sketched below.
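For completeness, a minimal sketch of such a wrapper, assuming your `train_gen`/`valid_gen` yield `(x_batch, y_batch)` with `y_batch` of shape `(batch_size, 8)` (the name `multi_output_gen` is hypothetical):

def multi_output_gen(gen, n_outputs=8):
    # split each (batch, n_outputs) target into a list of n_outputs (batch, 1) arrays
    for x_batch, y_batch in gen:
        yield x_batch, [y_batch[:, i:i + 1] for i in range(n_outputs)]

train_gen_multi = multi_output_gen(train_gen)
valid_gen_multi = multi_output_gen(valid_gen)

You would then pass `train_gen_multi` and `valid_gen_multi` to `fit_generator` in place of the original generators.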
