PyTorch error: the run fails with the error below right after the first epoch finishes. Any guidance would be appreciated.

C:/w/1/s/tmp_conda_3.7_100118/conda/conda-bld/pytorch_1579082551706/work/aten/src/THC/THCTensorScatterGather.cu:100: block: [0,0,0], thread: [2,0,0] Assertion `indexValue >= 0 && indexValue < src.sizes[dim]` failed.
C:/w/1/s/tmp_conda_3.7_100118/conda/conda-bld/pytorch_1579082551706/work/aten/src/THC/THCTensorScatterGather.cu:100: block: [0,0,0], thread: [3,0,0] Assertion `indexValue >= 0 && indexValue < src.sizes[dim]` failed.
C:/w/1/s/tmp_conda_3.7_100118/conda/conda-bld/pytorch_1579082551706/work/aten/src/THC/THCTensorScatterGather.cu:100: block: [0,0,0], thread: [9,0,0] Assertion `indexValue >= 0 && indexValue < src.sizes[dim]` failed.
C:/w/1/s/tmp_conda_3.7_100118/conda/conda-bld/pytorch_1579082551706/work/aten/src/THC/THCTensorScatterGather.cu:100: block: [0,0,0], thread: [12,0,0] Assertion `indexValue >= 0 && indexValue < src.sizes[dim]` failed.
Traceback (most recent call last):
  File "D:/桌面/代码/resnet18_train.py", line 117, in <module>
    loss.backward()
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered

Do your labels contain negative values? And are your labels one-hot arrays like [0, 0, 0, 1], or class indices like 0, 1, 2, 3? nn.CrossEntropyLoss expects class indices in the range [0, num_classes); negative or out-of-range values are usually what trigger this device-side assert.
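
A quick way to check this before the loss is computed (a minimal sketch; num_classes, labels, one_hot and logits are hypothetical names for illustration, not taken from the original resnet18_train.py):

import torch
import torch.nn as nn

num_classes = 4                       # assumed: number of logits the model outputs
labels = torch.tensor([0, 1, 2, 3])   # hypothetical batch of class-index labels

# nn.CrossEntropyLoss expects class indices in [0, num_classes), not one-hot vectors.
# Negative or out-of-range indices are what usually trigger this device-side assert.
assert labels.min().item() >= 0 and labels.max().item() < num_classes, \
    f"labels out of range: min={labels.min().item()}, max={labels.max().item()}"

# If the labels are one-hot like [0, 0, 0, 1], convert them to class indices first:
one_hot = torch.tensor([[0, 0, 0, 1], [1, 0, 0, 0]])
indices = one_hot.argmax(dim=1)       # -> tensor([3, 0])

logits = torch.randn(2, num_classes)  # fake model output, just for illustration
loss = nn.CrossEntropyLoss()(logits, indices)
print(loss.item())

If the assert still fires only on GPU, running the script with CUDA_LAUNCH_BLOCKING=1 set in the environment, or temporarily moving model and data to the CPU, will make the stack trace point at the operation that actually received the bad index instead of at loss.backward().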

https://github.com/amdegroot/ssd.pytorch/issues/231

OP, did you ever solve this? I'm running into the same problem!