D:\Anaconda3\envs\pytorch\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2.2\plugins\python-ce\helpers\pydev\pydevd.py" --multiproc --qt-support=auto --client 127.0.0.1 --port 51506 --file G:/deep-learning-for-image-processing-master/pytorch_segmentation/deeplab_v3/train.py
Connected to pydev debugger (build 212.5284.44)
Traceback (most recent call last):
File "G:\deep-learning-for-image-processing-master\pytorch_segmentation\deeplab_v3\train_utils\train_and_eval.py", line 46, in train_one_epoch
loss = criterion(output, target)
File "G:\deep-learning-for-image-processing-master\pytorch_segmentation\deeplab_v3\train_utils\train_and_eval.py", line 10, in criterion
losses[name] = nn.functional.cross_entropy(x, target) #原为(x, target, ignore_index=255)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 2846, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of size: : [4, 240, 240, 3]
python-BaseException
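The error says the loss received targets of shape [4, 240, 240, 3], i.e. a batch of RGB (channel-last) mask images, while cross_entropy for segmentation expects 2D class-index maps of shape [N, H, W] with dtype long. The fix is to convert each mask to a class-index tensor inside the dataset before batching. A minimal sketch, assuming either VOC-style palette PNGs or plain RGB masks; the file name "mask.png" and the color-to-class mapping are placeholders for your own data:

import numpy as np
import torch
from PIL import Image

# Hypothetical helper: turn one segmentation mask into a 2D class-index tensor
# before it reaches cross_entropy. Adjust the path and mapping to your dataset.
def mask_to_target(path):
    mask = Image.open(path)
    if mask.mode == "P":
        # Palette PNGs (VOC-style) already store one class index per pixel.
        return torch.as_tensor(np.array(mask), dtype=torch.int64)   # [H, W]
    # Plain RGB masks must be mapped color -> class index (assumed mapping).
    rgb = np.array(mask.convert("RGB"))                             # [H, W, 3]
    color_to_class = {(0, 0, 0): 0, (128, 0, 0): 1}                 # example colors
    target = np.zeros(rgb.shape[:2], dtype=np.int64)
    for color, cls in color_to_class.items():
        target[(rgb == color).all(axis=-1)] = cls
    return torch.as_tensor(target)                                  # [H, W]

# After the DataLoader stacks these, target has shape [N, H, W] and
# nn.functional.cross_entropy(output, target) no longer raises this error.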
    for epoch in range(args.start_epoch, args.epochs):
        mean_loss, lr = train_one_epoch(model, optimizer, train_loader, device, epoch,
                                        lr_scheduler=lr_scheduler, print_freq=args.print_freq, scaler=scaler)

        confmat = evaluate(model, val_loader, device=device, num_classes=num_classes)
        val_info = str(confmat)
        print(val_info)
        # write into txt
        with open(results_file, "a") as f:
            # record the train_loss, lr and validation metrics for each epoch
            train_info = f"[epoch: {epoch}]\n" \
                         f"train_loss: {mean_loss:.4f}\n" \
                         f"lr: {lr:.6f}\n"
            f.write(train_info + val_info + "\n\n")
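Once the targets are proper [N, H, W] index maps, the ignore_index=255 argument that the traceback's comment says was removed can be restored, so that void/border pixels (value 255 in VOC-style masks) are skipped by the loss. A minimal sketch of such a criterion, keeping the losses-dict structure visible in the traceback; the "out"/"aux" weighting is an assumption about the rest of train_and_eval.py:

from torch import nn

def criterion(inputs, target):
    # inputs: dict of head name -> logits [N, C, H, W]; target: class indices [N, H, W]
    losses = {}
    for name, x in inputs.items():
        # pixels labeled 255 (object borders / padding) are excluded from the loss
        losses[name] = nn.functional.cross_entropy(x, target, ignore_index=255)
    if len(losses) == 1:
        return losses["out"]
    # assumed auxiliary-head weighting; adjust to match your model's outputs
    return losses["out"] + 0.5 * losses["aux"]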
I have the same problem. Has the original poster solved it yet?