Every time I try to run model.predict() on an image that is too large, it throws an error (which I can understand), but the message says: tensorflow/core/framework/allocator.cc:101] Allocation of 3717120800 exceeds 10% of system memory
Yes, my system has 32 GB of RAM, but why can't it use 20% or even 30%? (By the way, CUDA is disabled for these tests, because my GPU only has 6 GB.) Also: I know this is a warning and not an error, but the program crashes a few seconds later without giving me any other output ;(
Here is the model code:
def build_dce_net():
    input_img = keras.Input(shape=[None, None, 3])
    conv1 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(input_img)
    conv2 = layers.Conv2D(
        64, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv1)
    conv3 = layers.Conv2D(
        96, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv2)
    conv4 = layers.Conv2D(
        96, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv3)
    int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
    conv5 = layers.Conv2D(
        64, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con1)
    int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
    conv6 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con2)
    int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
    x_r = layers.Conv2D(
        24, (3, 3), strides=(1, 1), activation="tanh", padding="same"
    )(int_con3)
    # return keras.models.load_model('./high-res-trained')
    return keras.Model(inputs=input_img, outputs=x_r)
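(Not from the original post: since the model is fully convolutional, one workaround is to downscale oversized inputs before calling predict(). A minimal sketch, where the helper name predict_capped and the max_side cap are my own choices:)

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def predict_capped(model, img, max_side=1024):
    """Downscale img (H, W, 3) so its longest side is at most max_side,
    then run prediction on the smaller copy to keep allocations bounded."""
    h, w = img.shape[:2]
    scale = max_side / max(h, w)
    if scale < 1.0:
        img = tf.image.resize(img, (int(h * scale), int(w * scale))).numpy()
    # predict expects a batch dimension
    return model.predict(img[np.newaxis, ...], verbose=0)
```

This trades output resolution for memory; for full-resolution results, tiling the image into overlapping patches would be the alternative.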
Yes, all of the code is indented normally; it just doesn't render properly on Stack Overflow.
Edit: after running the model on Ubuntu, I got a more useful log:
2022-05-31 13:41:27.744568: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 9663676416 exceeds 10% of free system memory.
2022-05-31 13:41:29.461537: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 14495514624 exceeds 10% of free system memory.
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted
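For what it's worth, those two allocation sizes line up exactly with the model's Concatenate activations at float32 precision if the input is about 4608x4096 pixels (an assumption on my part; the actual image size isn't stated in the post):

```python
# Back-of-the-envelope activation sizes for build_dce_net, assuming
# float32 activations and a hypothetical 4608x4096 input image.
h, w = 4608, 4096
bytes_per_float = 4

# int_con2 = Concatenate([conv5, conv2]) -> 64 + 64 = 128 channels
print(h * w * (64 + 64) * bytes_per_float)   # 9663676416
# int_con1 = Concatenate([conv4, conv3]) -> 96 + 96 = 192 channels
print(h * w * (96 + 96) * bytes_per_float)   # 14495514624
```

So a single intermediate tensor alone is 9-14 GB here, and several such tensors are alive at once during inference, which is enough to exhaust 32 GB of RAM and trigger the std::bad_alloc.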
Answer:
I had the same problem. I set up swap memory on Linux, and that solved it.
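(For reference, a typical way to add a swap file on Linux; a sketch only, and the 16G size is an arbitrary example, pick one that fits your disk and workload:)

```shell
# Create a 16 GiB swap file and enable it (requires root).
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile   # swap files must not be world-readable
sudo mkswap /swapfile      # format it as swap
sudo swapon /swapfile      # enable it immediately
# To make it persistent across reboots, add this line to /etc/fstab:
#   /swapfile none swap sw 0 0
```

Note that swap only prevents the crash; prediction on images this large will be very slow once the allocator starts paging to disk.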