For a logistic regression defined in PyTorch, what does it mean when the loss keeps getting larger during training?

for epoch in range(Epochs):
    q = torch.Tensor(q).float()              # targets as a float tensor
    loss = loss_fn(X, U, p, q)               # compute the loss
    optim.zero_grad()                        # clear the previous gradients
    loss.backward()                          # backpropagate
    optim.step()                             # update the parameters
    print('*' * 10)
    print('epoch {}'.format(epoch + 1))
    print('loss is {:.4f}'.format(loss.item()))   # loss for this epoch

X and U are the independent variables; p and q are the corresponding target (y) values.
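For reference, the snippet above does not show how loss_fn, optim, or the model parameters are defined. A minimal self-contained setup along the same lines might look like the sketch below; the nn.Linear model, the use of nn.BCEWithLogitsLoss, the learning rate, and the data shapes are all assumptions, since the original loss_fn is not shown.

import torch
import torch.nn as nn

# Assumed data: X and U are feature matrices, p and q their binary targets.
torch.manual_seed(0)
X, U = torch.randn(100, 3), torch.randn(100, 3)
p = torch.randint(0, 2, (100, 1)).float()
q = torch.randint(0, 2, (100, 1)).float()

model = nn.Linear(3, 1)                    # logistic regression = linear layer + sigmoid
criterion = nn.BCEWithLogitsLoss()         # sigmoid + binary cross-entropy in one call

def loss_fn(X, U, p, q):
    # assumed form: sum of the two losses, one per (input, target) pair
    return criterion(model(X), p) + criterion(model(U), q)

optim = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    loss = loss_fn(X, U, p, q)
    optim.zero_grad()
    loss.backward()
    optim.step()
    print('epoch {}  loss {:.4f}'.format(epoch + 1, loss.item()))

With the original custom loss_fn, the training log looked like this: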
**********
epoch 1
loss is 52.6023
**********
epoch 2
loss is 52.6023
**********
epoch 3
loss is 52.6023
**********
epoch 4
loss is 52.6022
**********
epoch 5
loss is 52.6022
**********
epoch 6
loss is 52.6122
**********
epoch 7
loss is 52.6021
**********
epoch 8
loss is 52.6021
**********
epoch 9
loss is 52.6121
**********
epoch 10
loss is 52.6120
**********
epoch 11
loss is 52.6120
**********
epoch 12
loss is 52.6219
**********
epoch 13
loss is 52.6218
**********
epoch 14
loss is 52.6118
**********
epoch 15
loss is 52.6217
**********
epoch 16
loss is 52.6117
**********
epoch 17
loss is 52.6016
**********
epoch 18
loss is 52.5715
**********
epoch 19
loss is 52.5615
**********
epoch 20
loss is 52.5414
I hope some passing experts can give this newbie a bit of advice, thanks~

This is quite normal. It's like climbing a mountain: before you reach the highest peak, the path isn't uphill the whole way. Same idea here: while the weights are being adjusted, it is perfectly normal for the loss to rise locally. If an optimization algorithm's loss only ever went down, that would actually suggest the algorithm isn't very good, because it would easily get stuck in a local optimum and never escape.
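As a toy illustration of this point (not taken from the original post): with mini-batch SGD, each step uses a noisy gradient estimate, so the loss measured on the full dataset usually rises on some individual steps even though the overall trend is downward. The data, model, and learning rate below are made up purely for the demonstration.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 2)
y = (X[:, :1] + X[:, 1:] > 0).float()      # linearly separable toy labels

model = nn.Linear(2, 1)
criterion = nn.BCEWithLogitsLoss()
optim = torch.optim.SGD(model.parameters(), lr=0.5)

for step in range(20):
    idx = torch.randint(0, 200, (8,))       # small random mini-batch -> noisy gradient
    loss = criterion(model(X[idx]), y[idx])
    optim.zero_grad()
    loss.backward()
    optim.step()
    with torch.no_grad():
        full = criterion(model(X), y)       # loss over the whole dataset
    # the full-data loss trends down but typically rises on some individual steps
    print('step {:2d}  full loss {:.4f}'.format(step + 1, full.item()))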

I can only give you a rough idea of the cause. The loss functions built into deep learning frameworks like this are basically never the problem, so if the loss fails to converge you have to look for the cause elsewhere. Whether it is a model problem or a data problem is something you will have to work out by experimenting yourself.
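A few general things worth checking in this situation (these are generic suggestions, not a diagnosis of the code above; model and optim below refer to a setup like the sketch earlier in the thread):

import torch

# 1. Confirm the optimizer actually holds the learnable parameters.
#    An optimizer built over the wrong (or an empty) parameter list leaves
#    the loss nearly flat, much like the log above.
for group in optim.param_groups:
    print('lr =', group['lr'], ' num params =', len(group['params']))

# 2. After loss.backward(), check that the gradients are non-zero.
for name, param in model.named_parameters():
    grad_norm = None if param.grad is None else param.grad.norm().item()
    print(name, 'grad norm =', grad_norm)

# 3. Standardize the inputs; badly scaled features make logistic regression
#    converge very slowly.
X = (X - X.mean(dim=0)) / (X.std(dim=0) + 1e-8)
U = (U - U.mean(dim=0)) / (U.std(dim=0) + 1e-8)

# 4. Try a noticeably smaller (or larger) learning rate, e.g.
optim = torch.optim.SGD(model.parameters(), lr=0.01)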