Cannot fit data to a 3D convolutional U-Net in Keras

I've run into a problem: I want to build a 3D convolutional U-Net, and I'm using Keras for it.

My data comes from the 2017 Data Science Bowl competition and consists of MRI images. All MRI images are stored as numpy arrays (with all pixel values scaled to between 0 and 1) of shape:

data_ch.shape
(94, 50, 50, 50, 1)

94 patients, 50 MRI slices per patient, each slice a 50×50 image, 1 channel: the MRI images of the patients in the dataset.
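For context, here is a minimal sketch of how an array with this shape could be assembled. The random volumes are only stand-ins for the preprocessed Data Science Bowl scans; the actual DICOM loading and resizing are not shown in the post:

import numpy as np

# Stand-in for 94 preprocessed patient volumes (50 slices of 50x50), values already in [0, 1].
# In the real pipeline these would come from the competition's DICOM files.
patient_volumes = [np.random.rand(50, 50, 50).astype('float32') for _ in range(94)]

# Stack into one array and add a trailing channel axis -> (94, 50, 50, 50, 1)
data_ch = np.stack(patient_volumes, axis=0)[..., np.newaxis]
print(data_ch.shape)  # (94, 50, 50, 50, 1)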

I want to build a 3D convolutional U-Net whose input and output are the same 3D array. The structure of the 3D U-Net is as follows:

from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D

input_img = Input(shape=(data_ch.shape[1], data_ch.shape[2], data_ch.shape[3], data_ch.shape[4]))
x = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x)
x = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)
x = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x)
x = UpSampling3D(size=(2, 2, 2))(x)
x = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)  # PADDING IS NOT THE SAME!!!!!
x = UpSampling3D(size=(2, 2, 2))(x)
x = Conv3D(filters=1, kernel_size=(3, 3, 3), activation='sigmoid')(x)

model = Model(input_img, x)
model.compile(optimizer='adadelta', loss='binary_crossentropy')
model.summary()

Layer (type)                 Output Shape              Param #
=================================================================
input_5 (InputLayer)         (None, 50, 50, 50, 1)     0
_________________________________________________________________
conv3d_27 (Conv3D)           (None, 50, 50, 50, 8)     224
_________________________________________________________________
max_pooling3d_12 (MaxPooling (None, 25, 25, 25, 8)     0
_________________________________________________________________
conv3d_28 (Conv3D)           (None, 25, 25, 25, 8)     1736
_________________________________________________________________
max_pooling3d_13 (MaxPooling (None, 13, 13, 13, 8)     0
_________________________________________________________________
up_sampling3d_12 (UpSampling (None, 26, 26, 26, 8)     0
_________________________________________________________________
conv3d_29 (Conv3D)           (None, 26, 26, 26, 8)     1736
_________________________________________________________________
up_sampling3d_13 (UpSampling (None, 52, 52, 52, 8)     0
_________________________________________________________________
conv3d_30 (Conv3D)           (None, 50, 50, 50, 1)     217
=================================================================
Total params: 3,913
Trainable params: 3,913
Non-trainable params: 0
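The summary already shows where the odd sizes come from: with padding='same' the pooling layers use ceiling division, so 50 → 25 → 13, two rounds of upsampling then give 13 → 26 → 52, and only the final unpadded Conv3D brings the size back to 50. A quick sanity check of that arithmetic (plain Python, just tracing the spatial size; the 'same'-padded convolutions leave it unchanged):

import math

size = 50
for _ in range(2):                 # two MaxPooling3D(pool_size=2, padding='same') layers
    size = math.ceil(size / 2)     # 50 -> 25 -> 13
for _ in range(2):                 # two UpSampling3D(size=2) layers
    size *= 2                      # 13 -> 26 -> 52
size -= 2                          # final Conv3D, kernel 3, no padding: 52 -> 50
print(size)                        # 50

So the graph is consistent on paper; the 13-versus-14 mismatch in the error further down comes from Theano's GPU handling of the 'same'-padded pooling, which is what the answer below works around.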

But when I try to fit the data to this network:

model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1)

the program throws an error:

ValueError                                Traceback (most recent call last)
C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    883             outputs =\
--> 884                 self.fn() if output_subset is None else\
    885                 self.fn(output_subset=output_subset)

ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 1, destination=13, source=14

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-26-b334d38d9608> in <module>()
----> 1 model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1)

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
   1496                               val_f=val_f, val_ins=val_ins, shuffle=shuffle,
   1497                               callback_metrics=callback_metrics,
-> 1498                               initial_epoch=initial_epoch)
   1499
   1500     def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\engine\training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
   1150                 batch_logs['size'] = len(batch_ids)
   1151                 callbacks.on_batch_begin(batch_index, batch_logs)
-> 1152                 outs = f(ins_batch)
   1153                 if not isinstance(outs, list):
   1154                     outs = [outs]

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\backend\theano_backend.py in __call__(self, inputs)
   1156     def __call__(self, inputs):
   1157         assert isinstance(inputs, (list, tuple))
-> 1158         return self.function(*inputs)
   1159
   1160

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    896                     node=self.fn.nodes[self.fn.position_of_error],
    897                     thunk=thunk,
--> 898                     storage_map=getattr(self.fn, 'storage_map', None))
    899             else:
    900                 # old-style linkers raise their own exceptions

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\gof\link.py in raise_with_op(node, thunk, exc_info, storage_map)
    323         # extra long error message in that case.
    324         pass
--> 325     reraise(exc_type, exc_value, exc_trace)
    326
    327

C:\Users\Taranov\Anaconda3\lib\site-packages\six.py in reraise(tp, value, tb)
    683             value = tp()
    684         if value.__traceback__ is not tb:
--> 685             raise value.with_traceback(tb)
    686         raise value
    687

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    882         try:
    883             outputs =\
--> 884                 self.fn() if output_subset is None else\
    885                 self.fn(output_subset=output_subset)
    886         except Exception:

ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 1, destination=13, source=14
Apply node that caused the error: GpuAlloc(GpuDimShuffle{0,2,x,3,4,1}.0, Shape_i{0}.0, TensorConstant{13}, TensorConstant{2}, TensorConstant{13}, TensorConstant{13}, TensorConstant{8})
Toposort index: 163
Inputs types: [CudaNdarrayType(float32, (False, False, True, False, False, False)), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int8, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
Inputs shapes: [(10, 14, 1, 14, 14, 8), (), (), (), (), (), ()]
Inputs strides: [(21952, 196, 0, 14, 1, 2744), (), (), (), (), (), ()]
Inputs values: ['not shown', array(10, dtype=int64), array(13, dtype=int64), array(2, dtype=int8), array(13, dtype=int64), array(13, dtype=int64), array(8, dtype=int64)]
Outputs clients: [[GpuReshape{5}(GpuAlloc.0, MakeVector{dtype='int64'}.0)]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

I tried setting the Theano flags as suggested:

import theano
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32, optimizer='None',exception_verbosity=high"
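One thing worth noting about this attempt: Theano reads THEANO_FLAGS when it is first imported, so assigning os.environ["THEANO_FLAGS"] after `import theano` (as in the snippet above) has no effect, and the quotes around None are probably not what Theano expects either. A sketch of the intended order, using the same flag values as above:

import os

# Must be set before theano (or Keras with the Theano backend) is imported for the first time
os.environ["THEANO_FLAGS"] = (
    "mode=FAST_RUN,device=gpu,floatX=float32,"
    "optimizer=None,exception_verbosity=high"
)

import theano  # reads THEANO_FLAGS at import time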

But it still doesn't work.

Could you help me? Thank you very much!


Answer:

OK... this sounds strange, but MaxPooling3D seems to have some kind of bug when padding='same' is used. So I rewrote your code without that padding and added an initial padding at the start to make your dimensions compatible:

import keras.backend as K
from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D, Lambda

inputShape = (data_ch.shape[1], data_ch.shape[2], data_ch.shape[3], data_ch.shape[4])
paddedShape = (data_ch.shape[1] + 2, data_ch.shape[2] + 2, data_ch.shape[3] + 2, data_ch.shape[4])

# initial padding
input_img = Input(shape=inputShape)
x = Lambda(lambda x: K.spatial_3d_padding(x, padding=((1, 1), (1, 1), (1, 1))),
           output_shape=paddedShape)(input_img)  # Lambda layers need an output_shape

# your original code without the padding in the MaxPooling layers (input_img replaced with x)
x = Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x)
x = MaxPooling3D(pool_size=2)(x)
x = Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x)
x = MaxPooling3D(pool_size=2)(x)
x = UpSampling3D(size=2)(x)
x = Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x)  # PADDING IS NOT THE SAME!!!!!
x = UpSampling3D(size=2)(x)
x = Conv3D(filters=1, kernel_size=3, activation='sigmoid')(x)

model = Model(input_img, x)
model.compile(optimizer='adadelta', loss='binary_crossentropy')
model.summary()

print(model.predict(data_ch)[1])
model.fit(data_ch, data_ch, epochs=1, verbose=2, batch_size=10)
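As a side note (not part of the original answer): the same "pad the input to an even size first" idea can also be written with Keras's built-in ZeroPadding3D layer instead of a Lambda, which avoids specifying output_shape by hand. A sketch under the same shapes as above:

from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D, ZeroPadding3D

inp = Input(shape=(50, 50, 50, 1))
z = ZeroPadding3D(padding=(1, 1, 1))(inp)             # 50 -> 52 on each spatial axis
z = Conv3D(8, (3, 3, 3), activation='relu', padding='same')(z)
z = MaxPooling3D(pool_size=(2, 2, 2))(z)              # 52 -> 26
z = Conv3D(8, (3, 3, 3), activation='relu', padding='same')(z)
z = MaxPooling3D(pool_size=(2, 2, 2))(z)              # 26 -> 13
z = UpSampling3D(size=(2, 2, 2))(z)                   # 13 -> 26
z = Conv3D(8, (3, 3, 3), activation='relu', padding='same')(z)
z = UpSampling3D(size=(2, 2, 2))(z)                   # 26 -> 52
out = Conv3D(1, (3, 3, 3), activation='sigmoid')(z)   # unpadded conv: 52 -> 50

alt_model = Model(inp, out)
alt_model.compile(optimizer='adadelta', loss='binary_crossentropy')
alt_model.summary()  # output shape should end at (None, 50, 50, 50, 1)

Since no 'same'-padded pooling is involved, the shapes line up the same way as in the answer's version; whether it behaves identically on the Theano GPU backend would need to be checked.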
