I was training a model with learning rate 0.0004 and batch_size 5, and it failed with this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 6.00 GiB total capacity; 1.77 GiB already allocated; 2.11 GiB free; 1.85 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What I don't get: it says 2.11 GiB free, which seems like plenty, so why does it still report running out of memory? I'm running this on my own laptop with a 3060 card. Is it something I didn't configure properly?
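For reference, the figures in that message can be read out directly with the standard torch.cuda utilities; a minimal sketch (GPU index 0 as in the error, run inside the training process):

```python
import torch

# What the driver currently reports as free/total memory on the device.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"driver free:     {free_bytes / 2**30:.2f} GiB / {total_bytes / 2**30:.2f} GiB")

# What PyTorch's caching allocator has handed out to tensors vs. what it holds in its pool.
print(f"torch allocated: {torch.cuda.memory_allocated(0) / 2**30:.2f} GiB")
print(f"torch reserved:  {torch.cuda.memory_reserved(0) / 2**30:.2f} GiB")

# Per-pool breakdown; useful for spotting fragmentation of the reserved memory.
print(torch.cuda.memory_summary(0, abbreviated=True))
```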
The usual advice is to set batch_size to 2 or a power of two (2^n); I'd suggest dropping batch_size to 1 and trying again, because the error really is caused by running out of GPU memory. Even though 2.11 GiB shows as free, that memory can be fragmented or claimed by other processes sharing the 6 GiB card, so the allocator can still fail on a 36 MiB block; the message itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation.
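Below is a minimal sketch of that suggestion, assuming a dummy dataset and model in place of the real ones (only the learning rate is taken from the question; max_split_size_mb:128 is just an example value, and the allocator setting has to be applied before the first CUDA allocation):

```python
import os

# Optional: the allocator tweak suggested in the error message (example value).
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data and model standing in for the real training setup.
train_dataset = TensorDataset(torch.randn(100, 3, 224, 224),
                              torch.randint(0, 10, (100,)))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 224 * 224, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0004)
criterion = torch.nn.CrossEntropyLoss()

# Start with batch_size=1; if it trains cleanly, double it (2, 4, 8, ...) until memory runs out.
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=True)

for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The largest power of two that still trains without the error is usually a reasonable working batch size for a 6 GiB card.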