A PyTorch pitfall: the input shapes of MSELoss


Problem

return _VF.broadcast_tensors(tensors)  # type: ignore
RuntimeError: The size of tensor a (96) must match the size of tensor b (336) at non-singleton dimension 1

loss(output, target)

When my input had shape (B, C, 336) and the target had shape (B, C, 96), no error was raised.

But when my input was (B, C, 96) and the target was (B, C, 336), the error above appeared.
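The error comes from `torch.broadcast_tensors`: MSELoss tries to broadcast the two shapes together, and broadcasting only succeeds when, comparing trailing dimensions, each pair of sizes is equal or one of them is 1. Here 96 vs 336 at a non-singleton dimension can never broadcast. A minimal sketch of that rule in plain Python (the shapes below are illustrative, placing the mismatched sizes at dimension 1 as in the error message):

```python
def broadcastable(shape_a, shape_b):
    """Mimic PyTorch/NumPy broadcasting: align shapes from the trailing
    dimension; each size pair must be equal, or one of them must be 1."""
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True

# The shapes behind the error: 96 vs 336 at dimension 1, neither is 1.
print(broadcastable((32, 96, 7), (32, 336, 7)))   # False -> RuntimeError
print(broadcastable((32, 336, 7), (32, 336, 7)))  # True  -> shapes match
print(broadcastable((32, 336, 1), (32, 336, 7)))  # True  -> broadcasts (silently!)
```

Note the third case: if one size happens to be 1, MSELoss will silently broadcast instead of erroring, which usually computes the wrong loss, so matching the shapes exactly is the safe choice.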

Solution

Printing the full traceback showed that the problem is the dimensions of the inputs to the loss; see the related post linked at the end.

  File "/home/fight/Desktop/LTSF-Linear-main/exp/exp_main.py", line 181, in train
    loss = criterion(outputs, batch_y)
  File "/home/fight/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/fight/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 528, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/home/fight/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/functional.py", line 2928, in mse_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/home/fight/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/functional.py", line 74, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore
RuntimeError: The size of tensor a (96) must match the size of tensor b (336) at non-singleton dimension 1
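In a forecasting training loop like the one in `exp_main.py` above, the usual fix is to make sure the model output and `batch_y` cover the same number of time steps before calling the criterion, e.g. by slicing the target down to the prediction horizon. A minimal sketch, using NumPy (whose broadcasting semantics match PyTorch's) and hypothetical names `pred_len`, `outputs`, `batch_y`; the exact slicing must be checked against your own data layout:

```python
import numpy as np

def mse_loss(output, target):
    """Mean squared error that insists on exactly matching shapes,
    so no silent broadcasting can sneak in."""
    assert output.shape == target.shape, (output.shape, target.shape)
    return float(np.mean((output - target) ** 2))

pred_len = 96
B, C = 32, 7
outputs = np.zeros((B, pred_len, C))  # model predicts pred_len steps
batch_y = np.zeros((B, 336, C))       # raw target carries 336 steps

# Fix: keep only the last pred_len steps of the target before the loss
# (hypothetical reconstruction of the slicing step; adapt dim order to
# your pipeline).
batch_y = batch_y[:, -pred_len:, :]
print(batch_y.shape)                 # (32, 96, 7)
print(mse_loss(outputs, batch_y))    # 0.0
```

With the shapes aligned, `broadcast_tensors` inside `F.mse_loss` becomes a no-op and the RuntimeError disappears.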

Related post: A PyTorch pitfall: why does the L1 loss fail to decrease during training? (一个菜鸟的奋斗, CSDN blog)