I'm running a convolutional neural network on a GPU in Colab Pro. I've selected GPU as the runtime accelerator and can confirm a GPU is available. The network I'm running is exactly the same as last night's, but each epoch now takes about 2 hours... last night each epoch took only about 3 minutes... and nothing has changed. I suspect Colab may be limiting my GPU usage, but I can't tell whether that is actually the problem. Does GPU speed fluctuate much depending on the time of day or similar factors? Below is some diagnostic info I printed; does anyone know how I can dig deeper into the root cause of this slowness?
I also tried changing the accelerator in Colab to 'None', and my network trained at the same speed as with 'GPU' selected, which means that for some reason I'm no longer training on the GPU, or resources are being heavily throttled. I'm using TensorFlow 2.1.
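For what it's worth, here is a minimal check (standard TF 2.x calls; this snippet is a diagnostic sketch added for illustration, not output from the notebook) to confirm whether TensorFlow itself can see the GPU and actually places ops on it:

import tensorflow as tf

# Devices TensorFlow can see, independent of what nvidia-smi reports.
print(tf.config.experimental.list_physical_devices('GPU'))
print(tf.test.gpu_device_name())  # empty string means no GPU visible to TF

# Log device placement, then run a small op; it should report GPU:0.
tf.debugging.set_log_device_placement(True)
a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
print(tf.matmul(a, b).device)

If the device list is empty, the runtime truly has no GPU attached to TensorFlow; if the matmul lands on GPU:0 but training is still slow, the bottleneck is elsewhere, most likely data loading.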
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
  print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
  print('and then re-execute this cell.')
else:
  print(gpu_info)

Sun Mar 22 11:33:14 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00    Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   40C    P0    32W / 250W |   8747MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
import humanize
import psutil
import GPUtil

def mem_report():
  print("CPU RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available))
  GPUs = GPUtil.getGPUs()
  for i, gpu in enumerate(GPUs):
    print('GPU {:d} ... Mem Free: {:.0f}MB / {:.0f}MB | Utilization {:3.0f}%'.format(
        i, gpu.memoryFree, gpu.memoryTotal, gpu.memoryUtil*100))

mem_report()
CPU RAM Free: 24.5 GB
GPU 0 ... Mem Free: 7533MB / 16280MB | Utilization 54%
Still no luck speeding this up. Here is my code, in case I'm overlooking something... as a sanity check, I also time the generators right after defining them below. By the way, the images come from an old Kaggle competition; the data can be found here, and the training images are stored on my Google Drive. https://www.kaggle.com/c/datasciencebowl
import os
import zipfile
import pathlib
import numpy as np
import tensorflow as tf
from PIL import Image
from IPython import display

# loading images from kaggle api
#os.environ['KAGGLE_USERNAME'] = ""
#os.environ['KAGGLE_KEY'] = ""
#!kaggle competitions download -c datasciencebowl

# unpacking zip files
#zipfile.ZipFile('./sampleSubmission.csv.zip', 'r').extractall('./')
#zipfile.ZipFile('./test.zip', 'r').extractall('./')
#zipfile.ZipFile('./train.zip', 'r').extractall('./')

data_dir = pathlib.Path('train')
image_count = len(list(data_dir.glob('*/*.jpg')))
CLASS_NAMES = np.array([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])

shrimp_zoea = list(data_dir.glob('shrimp_zoea/*'))
for image_path in shrimp_zoea[:5]:
  display.display(Image.open(str(image_path)))
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2)
    #rotation_range = 40,
    #width_shift_range = 0.2,
    #height_shift_range = 0.2,
    #shear_range = 0.2,
    #zoom_range = 0.2,
    #horizontal_flip = True,
    #fill_mode='nearest')
validation_split = 0.2
BATCH_SIZE = 32
BATCH_SIZE_VALID = 10
IMG_HEIGHT = 224
IMG_WIDTH = 224
STEPS_PER_EPOCH = np.ceil(image_count*(1-(validation_split))/BATCH_SIZE)
VALIDATION_STEPS = np.ceil((image_count*(validation_split)/BATCH_SIZE))
train_data_gen = image_generator.flow_from_directory(directory=str(data_dir),
                                                     subset='training',
                                                     batch_size=BATCH_SIZE,
                                                     class_mode='categorical',
                                                     shuffle=True,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     classes=list(CLASS_NAMES))

validation_data_gen = image_generator.flow_from_directory(directory=str(data_dir),
                                                          subset='validation',
                                                          batch_size=BATCH_SIZE_VALID,
                                                          class_mode='categorical',
                                                          shuffle=True,
                                                          target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                          classes=list(CLASS_NAMES))
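To test whether the Drive-backed generators are the bottleneck, it helps to time how fast they yield batches with no model in the loop (this timing loop is a diagnostic addition, not part of the original notebook):

import time

# Pull a few batches from the generator alone; if this is slow,
# training is I/O-bound and the GPU is mostly idle waiting for data.
n_batches = 20
start = time.time()
for _ in range(n_batches):
    images, labels = next(train_data_gen)
print('{:.2f} s per batch of {}'.format((time.time() - start) / n_batches, images.shape[0]))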
model_basic = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1000, activation='relu'),
    tf.keras.layers.Dense(121, activation='softmax')
])

model_basic.summary()
model_basic.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model_basic.fit(
    train_data_gen,
    epochs=10,
    verbose=1,
    validation_data=validation_data_gen,
    steps_per_epoch=STEPS_PER_EPOCH,
    validation_steps=VALIDATION_STEPS,
    initial_epoch=0
)
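A further sanity check (again an addition for diagnosis, not the original code) is to fit the same model on random in-memory arrays, which removes the input pipeline entirely; if these steps run quickly, the GPU is healthy and the slowness comes from feeding data off Drive:

# Synthetic data matching the model's input and output shapes.
x_fake = np.random.rand(320, 224, 224, 3).astype('float32')
y_fake = tf.keras.utils.to_categorical(
    np.random.randint(0, 121, size=320), num_classes=121)

# 10 steps of pure compute; no disk or network I/O involved.
model_basic.fit(x_fake, y_fake, batch_size=32, epochs=1)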
Answer:
Your nvidia-smi output clearly shows that a GPU is attached. Where is your training data stored? If it isn't on local disk, I'd suggest storing it there. Transfer speeds for remotely hosted training data can vary depending on where your Colab backend happens to be located.
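For example, if the images sit in a mounted Drive folder, copying them once to the VM's local disk and pointing the generators at the local copy usually restores throughput; the paths below are illustrative, not your actual layout:

from google.colab import drive
drive.mount('/content/drive')

# One-time copy from Drive to the VM's fast local disk (illustrative path).
!cp -r "/content/drive/My Drive/train" /content/train

# Re-point the data directory at the local copy before building generators.
data_dir = pathlib.Path('/content/train')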