Paper Reproduction: [PyTorch] (C3D) Learning Spatiotemporal Features with 3D Convolutional Networks


Table of Contents

Foreword

Reproduced Paper & Code

Problems Encountered

1. Learning Rate Decay & Using lr_scheduler

2. Iterating the UCF101 Dataloader


 

Foreword

This reproduction trains on video using the video-processing support that ships with PyTorch 1.7; for details, see: https://blog.csdn.net/qq_36627158/article/details/113791050

 


 

Reproduced Paper & Code

GitHub: https://github.com/BizhuWu/C3D_PyTorch (a star would be much appreciated~)

Paper: Learning Spatiotemporal Features with 3D Convolutional Networks

 


 

Problems Encountered

1. Learning Rate Decay & Using lr_scheduler

Reference pattern:

scheduler = torch.optim.lr_scheduler.xxx(optimizer, ...)   # any scheduler, e.g. StepLR
for epoch in range(epochs):
    train(...)
    optimizer.step()    # update the weights first
    scheduler.step()    # then update the learning rate, once per epoch

But I ran into a problem:

import torch
from torchvision.models import resnet18

net = resnet18()
optimizer = torch.optim.SGD(net.parameters(), 0.1)
# two schedulers are created on the same optimizer, but only the StepLR is stepped below
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[3, 6, 9], gamma=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 3, gamma=0.1)
for i in range(10):
    print(i, scheduler.get_lr())    # print the rate reported by get_lr()
    scheduler.step()

The output is wrong: at each decay epoch the printed rate carries an extra factor of gamma, then bounces back the following epoch:

0 [0.1]
1 [0.1]
2 [0.1]
3 [0.0010000000000000002]
4 [0.010000000000000002]
5 [0.010000000000000002]
6 [0.00010000000000000003]
7 [0.0010000000000000002]
8 [0.0010000000000000002]
9 [1.0000000000000004e-05]

Solution
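
The optimizer's learning rate is in fact decayed correctly; what is misleading is get_lr(), which is meant to be called inside step() and, at decay epochs, returns the rate with an extra factor of gamma (newer PyTorch versions even warn about this). A minimal sketch of the fix, assuming the goal is just to monitor the current rate: read it with get_last_lr() (available since PyTorch 1.4) or directly from the optimizer's param_groups.

import torch
from torchvision.models import resnet18

net = resnet18()
optimizer = torch.optim.SGD(net.parameters(), 0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 3, gamma=0.1)

for i in range(10):
    # both values reflect the rate the optimizer actually uses in this epoch
    print(i, scheduler.get_last_lr(), optimizer.param_groups[0]['lr'])
    # in a real training loop, optimizer.step() is called (once per batch) before this
    scheduler.step()

This prints 0.1 for the first three epochs, then 0.01, 0.001 and 0.0001 for each subsequent group of three epochs, as expected.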

 

 

2. Iterating the UCF101 Dataloader

When iterating over the dataloader:

for i, (v, a, l) in enumerate(dataloader):  # <- RuntimeError occurs here
    pass

the following error is raised:

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2 and 1 in dimension 1 at /opt/conda/conda-bld/pytorch_1579022060824/work/aten/src/TH/generic/THTensor.cpp:612

or

RuntimeError: stack expects each tensor to be equal size, but got [2, 28800] at entry 0 and [1, 28800] at entry 6

Cause analysis & reference solution: https://github.com/pytorch/vision/issues/2265

Judging from the error messages, the audio is the problem: UCF101 returns (video, audio, label), and the audio tensors differ in size across clips, so default_collate cannot stack them into a batch.

So it is recommended to write a custom collate_fn that filters out the returned audio.

Reference usage of collate_fn:

def custom_collate(batch):
    filtered_batch = []
    for video, _, label in batch:
        filtered_batch.append((video, label))
    return torch.utils.data.dataloader.default_collate(filtered_batch)

Putting this together with loading the UCF101 dataset:

import torch
from torchvision import datasets
from torch.utils.data import DataLoader

# FRAME_LENGTH, TRAIN_BATCH_SIZE and transform are defined elsewhere in the training script

def custom_collate(batch):
    # keep only (video, label); drop the variable-sized audio
    filtered_batch = []
    for video, _, label in batch:
        filtered_batch.append((video, label))
    return torch.utils.data.dataloader.default_collate(filtered_batch)


trainset = datasets.UCF101(
    root='data/UCF101/UCF-101',
    annotation_path='data/UCF101TrainTestSplits-RecognitionTask/ucfTrainTestlist',
    frames_per_clip=FRAME_LENGTH,
    num_workers=0,
    transform=transform,
)


trainset_loader = DataLoader(
    trainset,
    batch_size=TRAIN_BATCH_SIZE,
    shuffle=True,
    num_workers=0,
    collate_fn=custom_collate
)
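
With the custom collate_fn in place, each batch unpacks into just (video, label). A quick sanity-check loop (the exact video shape depends on FRAME_LENGTH and the transform used):

for i, (video, label) in enumerate(trainset_loader):
    # video: a batch of clips, label: a batch of class indices
    print(i, video.shape, label.shape)
    break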