When training deep learning models with PyTorch, I always assumed the A100 was much faster than the RTX 4090, so I never tried the 4090. Then I finally tried one and found it was actually more than twice as fast as the A100. What explains this? I was not using mixed-precision training.
The detailed specs of the two cards are shown in the figure below.
When the model fits in a single card's memory, it comes down to raw single-card performance, and the 4090's core is roughly 50-70% stronger than the A100's, so it is normal for a small model to train faster on it. Large models are a different story (especially when one card's memory cannot hold the model for training): the 40-series dropped NVLink, so multi-card bandwidth suffers, and large models all need multiple cards in parallel to run at all.
Also, the A100 is the data-center counterpart of the 30-series consumer cards, while the 40-series corresponds to the H100; accordingly, the H100 is several times faster than the A100.
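If you want to sanity-check the single-card gap yourself, a rough throughput probe like the sketch below will do. The toy MLP, batch size, and step count here are arbitrary assumptions, not a rigorous benchmark; run it on both machines and compare the numbers only relative to each other:

import time
import torch
import torch.nn as nn
import torch.nn.functional as F

def bench(device, steps=50, batch=64):
    # toy MLP stand-in; swap in your real model for a meaningful comparison
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
    opt = torch.optim.Adam(model.parameters())
    x = torch.randn(batch, 1024, device=device)
    y = torch.randint(0, 10, (batch,), device=device)
    for _ in range(5):  # warm-up: exclude CUDA init and allocator overhead
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    torch.cuda.synchronize(device)
    t0 = time.time()
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    torch.cuda.synchronize(device)  # wait for queued kernels before stopping the clock
    return steps * batch / (time.time() - t0)  # samples per second

print('throughput:', bench('cuda:0'))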
[Related recommendation]
After the data is read and the features are converted, the features are fed into the model for training.
The optimizer is the BERT-specific Adam variant.
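For reference, this is how such an optimizer is typically constructed with the old pytorch_pretrained_bert package; the hyperparameter values (warmup 0.05, weight decay 0.01) are common defaults, assumed here rather than taken from this project:

from pytorch_pretrained_bert.optimization import BertAdam

param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.01},
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0}]
optimizer = BertAdam(optimizer_grouped_parameters,
                     lr=config.learning_rate,
                     warmup=0.05,  # fraction of steps spent warming up (assumed value)
                     t_total=len(train_iter) * config.num_epochs)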
The train/test/validation split is 6:2:2.
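One way to produce such a split, sketched with sklearn (the variable names are placeholders, not from the project):

from sklearn.model_selection import train_test_split

# 60% train, then halve the remaining 40% into 20% test and 20% validation
train_data, rest = train_test_split(all_samples, test_size=0.4, random_state=1)
test_data, dev_data = train_test_split(rest, test_size=0.5, random_state=1)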
Every 100 batches the model is evaluated on the validation set and the accuracy is printed; whenever the validation loss improves on the best so far, the model parameters are saved. If the validation loss has not improved for 1000 consecutive batches, training stops early.
import time
import torch
import torch.nn.functional as F
from sklearn import metrics

# Counters for early stopping (their initialization is implied by the loop below)
total_batch = 0                  # number of batches processed so far
dev_best_loss = float('inf')     # best validation loss seen so far
last_improve = 0                 # batch index of the last validation improvement
flag = False                     # set when early stopping triggers
start_time = time.time()

for epoch in range(config.num_epochs):
    print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
    for i, (trains, labels) in enumerate(train_iter):
        outputs = model(trains)
        model.zero_grad()
        loss = F.cross_entropy(outputs, labels)
        loss.backward()
        optimizer.step()
        if total_batch % 100 == 0:
            # every 100 batches, report metrics on the train and dev sets
            true = labels.data.cpu()
            predic = torch.max(outputs.data, 1)[1].cpu()
            train_acc = metrics.accuracy_score(true, predic)
            dev_acc, dev_loss = evaluate(config, model, dev_iter)
            if dev_loss < dev_best_loss:
                dev_best_loss = dev_loss
                torch.save(model.state_dict(), config.save_path)
                improve = '*'
                last_improve = total_batch
            else:
                improve = ''
            time_dif = get_time_dif(start_time)  # project helper: elapsed time
            msg = 'Iter: {0:>6}, Train Loss: {1:>5.2}, Train Acc: {2:>6.2%}, Val Loss: {3:>5.2}, Val Acc: {4:>6.2%}, Time: {5} {6}'
            print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, time_dif, improve))
            model.train()
        total_batch += 1
        if total_batch - last_improve > config.require_improvement:
            # validation loss has not dropped for `require_improvement` (1000) batches: stop
            print("No optimization for a long time, auto-stopping...")
            flag = True
            break
    if flag:
        break
test(config, model, test_iter)
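evaluate() and test() are project helpers not shown in the snippet. Given how evaluate() is called above (returning an accuracy and a mean loss), a minimal sketch of what it presumably looks like:

import torch
import torch.nn.functional as F
from sklearn import metrics

def evaluate(config, model, data_iter):
    # assumed shape: average the loss and compute accuracy over the whole dev set
    model.eval()
    loss_total = 0.0
    labels_all, predict_all = [], []
    with torch.no_grad():
        for texts, labels in data_iter:
            outputs = model(texts)
            loss_total += F.cross_entropy(outputs, labels).item()
            labels_all.extend(labels.data.cpu().tolist())
            predict_all.extend(torch.max(outputs.data, 1)[1].cpu().tolist())
    acc = metrics.accuracy_score(labels_all, predict_all)
    return acc, loss_total / len(data_iter)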
Training results:
1245it [00:00, 6290.83it/s]Loading data...
170004it [00:28, 6068.60it/s]
42502it [00:07, 6017.43it/s]
42502it [00:06, 6228.82it/s]
Time usage: 0:00:42
Epoch [1/5]
Iter: 0, Train Loss: 1.8, Train Acc: 3.12%, Val Loss: 1.7, Val Acc: 9.60%, Time: 0:02:14 *
Iter: 100, Train Loss: 1.5, Train Acc: 25.00%, Val Loss: 1.4, Val Acc: 20.60%, Time: 0:05:10 *
...
Iter: 5300, Train Loss: 0.75, Train Acc: 65.62%, Val Loss: 1.0, Val Acc: 50.07%, Time: 2:45:41 *
Epoch [2/5]
Iter: 5400, Train Loss: 1.0, Train Acc: 62.50%, Val Loss: 1.0, Val Acc: 51.02%, Time: 2:48:46
...
Iter: 7000, Train Loss: 0.77, Train Acc: 75.00%, Val Loss: 1.0, Val Acc: 52.84%, Time: 3:38:26
No optimization for a long time, auto-stopping...
Test Loss: 1.0, Test Acc: 50.89%
Precision, Recall and F1-Score...
              precision    recall  f1-score   support

           1     0.6157    0.5901    0.6026      3706
           2     0.5594    0.1481    0.2342      3532
           3     0.4937    0.5883    0.5369      9678
           4     0.4903    0.5459    0.5166     12899
           5     0.6693    0.6394    0.6540     12687

    accuracy                         0.5543     42502
   macro avg     0.5657    0.5024    0.5089     42502
weighted avg     0.5612    0.5543    0.5463     42502
Time usage: 0:02:25
From these results, accuracy and F1 top out at around 60%. A closer look at the reviews explains why:
adjacent scores are barely distinguishable from the review text alone. A 2-star review, for example, can read exactly like a 1-star or a 3-star one, which makes predicting the exact score inherently hard. The test results do show, however, that positive and negative reviews are clearly separable, with accuracy reaching about ninety percent.
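To check that positive/negative claim quantitatively, one can collapse the five scores into two classes. The mapping below (1-2 negative, 4-5 positive, neutral 3s dropped from the gold labels; predicted 3s counted as negative) is one reasonable assumption, and labels_all / predict_all are placeholders for the collected gold and predicted scores:

import numpy as np
from sklearn import metrics

y_true, y_pred = np.array(labels_all), np.array(predict_all)
keep = y_true != 3                          # drop the neutral middle class
binary_true = (y_true[keep] >= 4).astype(int)
binary_pred = (y_pred[keep] >= 4).astype(int)
print('binary accuracy:', metrics.accuracy_score(binary_true, binary_pred))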