When building a neural network for natural language processing, I start with an Embedding layer (using pretrained embeddings). But when I declare the Embedding layer in Keras (TensorFlow backend), I get a ResourceExhaustedError:
ResourceExhaustedError: OOM when allocating tensor with shape[137043,300] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
  [[{{node embedding_4/random_uniform/RandomUniform}} = RandomUniform[T=DT_INT32, dtype=DT_FLOAT, seed=87654321, seed2=9524682, _device="/job:localhost/replica:0/task:0/device:GPU:0"](embedding_4/random_uniform/shape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
I have already searched Google: most ResourceExhaustedErrors happen during training, because the GPU does not have enough RAM, and they can be fixed by reducing the batch size.
But in my case I have not even started training yet! The problem is in this line:
q1 = Embedding(nb_words + 1, param['embed_dim'].value, weights=[word_embedding_matrix], input_length=param['sentence_max_len'].value)(question1)
Here, word_embedding_matrix is a matrix of shape (137043, 300), i.e. the pretrained embeddings.
As far as I know, this should not take up a huge amount of memory (unlike here):
137043 * 300 * 4 bytes ≈ 157 MiB
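For reference, a quick sanity check of the matrix footprint (a minimal sketch, assuming the matrix is stored as a float32 NumPy array; the zeros array is just a stand-in with the same shape and dtype as my real pretrained matrix):

import numpy as np

# Stand-in with the same shape/dtype as the real pretrained matrix
word_embedding_matrix = np.zeros((137043, 300), dtype=np.float32)
print(word_embedding_matrix.nbytes / 1024**2)  # ~156.8 MiB

So the matrix alone is nowhere near the roughly 11 GiB available on each card.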
Here are the GPUs being used:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:02:00.0 Off |                  N/A |
| 23%   32C    P8    16W / 250W |   6956MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 23%   30C    P8    16W / 250W |    530MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:82:00.0 Off |                  N/A |
| 23%   34C    P8    16W / 250W |    333MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 108...  Off  | 00000000:83:00.0 Off |                  N/A |
| 24%   46C    P2    58W / 250W |   4090MiB / 11178MiB |     23%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1087      C   uwsgi                                       1331MiB |
|    0      1088      C   uwsgi                                       1331MiB |
|    0      1089      C   uwsgi                                       1331MiB |
|    0      1090      C   uwsgi                                       1331MiB |
|    0      1091      C   uwsgi                                       1331MiB |
|    0      4176      C   /usr/bin/python3                             289MiB |
|    1      2631      C   ...e92/venvs/wordintent_venv/bin/python3.6   207MiB |
|    1      4176      C   /usr/bin/python3                             313MiB |
|    2      4176      C   /usr/bin/python3                             323MiB |
|    3      4176      C   /usr/bin/python3                             347MiB |
|    3     10113      C   python                                      1695MiB |
|    3     13614      C   python3                                     1347MiB |
|    3     14116      C   python                                       689MiB |
+-----------------------------------------------------------------------------+
Does anyone know why I am getting this exception?
Answer:
Based on this link, configuring TensorFlow not to grab the maximum amount of GPU memory up front seems to fix this.
Running the following code before declaring the model layers solves the problem:
import tensorflow as tf
from keras import backend as K

# Allocate GPU memory on demand instead of reserving the whole card up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Cap this process at 30% of the GPU's memory
config.gpu_options.per_process_gpu_memory_fraction = 0.3
session = tf.Session(config=config)
K.set_session(session)
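For anyone on TensorFlow 2.x / tf.keras: ConfigProto and Session no longer exist there, and the rough equivalent (a minimal sketch, assuming at least one GPU is visible and that this runs before any other TensorFlow GPU work) is to enable memory growth per device:

import tensorflow as tf

# Grow GPU memory on demand instead of reserving the whole card at startup
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

Given the nvidia-smi output above, it may also help to pin the process to a less busy card, for example by exporting CUDA_VISIBLE_DEVICES=2 before starting Python, since GPU 0 already has about 7 GiB in use.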