A question about Mean Teacher

Symptoms and background

Hello, I downloaded a simple Mean Teacher implementation from GitHub. Its bundled model does 2-class classification, and I trained it on 400 cat + 400 dog images, with 10% of them labeled and 90% unlabeled. The problem: during training, neither consistency_loss nor classify_loss converges; both just oscillate. Late in training the model also starts overfitting the labeled data. What could cause this? Is the dataset simply too small? If so, how much data does a task like this need, and what fraction of it should be labeled?
I am new to semi-supervised learning, so any pointers are much appreciated.
The log format is as follows: labeled: [0.488 | 0.594 (0.492, 0.750)], unlabeled: [0.519 | 0.616 (0.513, 0.771)], unseen: [0.350 | 0.500 (0.406, 0.650)]
Here, labeled is a validation set drawn from the labeled data, unlabeled is a validation set drawn from the unlabeled data, and unseen is drawn from data that never took part in training.
Each bracket such as [0.488 | 0.594 (0.492, 0.750)] reads as [Accuracy | F1 (P, R)].
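For reference, a minimal sketch of how a bracket like this could be produced from model outputs (my own illustration using sklearn.metrics; format_metrics is a hypothetical name, not code from the repo):

import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def format_metrics(outputs: torch.Tensor, labels: torch.Tensor) -> str:
    # outputs: raw logits, shape [N, 2]; labels: integer class ids, shape [N]
    preds = outputs.argmax(dim=1).cpu().numpy()
    y = labels.cpu().numpy()
    acc = accuracy_score(y, preds)
    p, r, f1, _ = precision_recall_fscore_support(y, preds, average='binary')
    return f"[{acc:.3f} | {f1:.3f} ({p:.3f}, {r:.3f})]"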

Relevant code
class MobileNet(nn.Module):
    def __init__(self):
        super(MobileNet, self).__init__()

        def conv_bn(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True)
            )

        def conv_dw(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.ReLU(inplace=True),

                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True),
            )

        self.model = nn.Sequential(
            conv_bn(3, 32, 2),
            conv_dw(32, 64, 1),
            conv_dw(64, 96, 2),
            conv_dw(96, 96, 1),
            conv_dw(96, 128, 2),
            conv_dw(128, 128, 1),
            conv_dw(128, 256, 2),
            conv_dw(256, 256, 1),
            conv_dw(256, 512, 1),
            nn.AvgPool2d(2),
        )
        self.fc = nn.Linear(512, 2)

    def forward(self, x):
        x = self.model(x)
        x = x.view(-1, 512)  # assumes the trunk has been reduced to a 512x1x1 map
        x = self.fc(x)
        return x
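# Sanity check (my addition, not from the repo): with these strides the trunk reduces to a
# 512x1x1 map only for 32x32 inputs, which is exactly what view(-1, 512) assumes; other
# input sizes would silently fold spatial positions into the batch dimension.
import torch
net_probe = MobileNet()
out = net_probe(torch.randn(4, 3, 32, 32))  # CIFAR-sized dummy batch
assert out.shape == (4, 2), out.shape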

# Model initialization (student, teacher)
net_student = MobileNet().cuda()
net_teacher = MobileNet().cuda()

# the teacher is never trained by backprop; its weights only change via the EMA update below
for param in net_teacher.parameters():
    param.detach_()

if os.path.isfile('white.pt'):
    net_student.load_state_dict(torch.load('white.pt'))
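
# update_ema_variables is called in the training loop below but not shown in my snippet;
# following the reference mean-teacher implementation, it should be the exponential moving
# average update, roughly (a sketch matching the 3-argument call I use; the repo's exact
# signature may differ):
def update_ema_variables(student, teacher, alpha):
    # teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(alpha).add_(s_param.data, alpha=1 - alpha)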

train_data = mydata(trainPath, './', is_train=True)
weights = make_weights_for_balanced_classes(train_data.labels, 3)  # NOTE: passes 3 classes, but this is a 2-class task
weights = torch.DoubleTensor(weights)
sampler = torch.utils.data.WeightedRandomSampler(weights, dataCount)  # sampling method, see https://zhuanlan.zhihu.com/p/100280685?utm_source=qq
train_dataloader = DataLoader(train_data, batch_size=min_batch_size, shuffle=True)  # NOTE: the sampler above is never used
# Validation set from the labeled data
labeled_valid_data = mydata(labeled_validPath, './', is_train=False)
labeled_valid_dataloader = DataLoader(labeled_valid_data, batch_size=min_batch_size, shuffle=True)
# Validation set from the unlabeled data
unlabeled_valid_data = mydata(unlabeled_validPath, './', is_train=False)
unlabeled_valid_dataloader = DataLoader(unlabeled_valid_data, batch_size=min_batch_size, shuffle=True)
# Validation set from unseen data
unseen_valid_data = mydata(unseen_validPath, './', is_train=False)
unseen_valid_dataloader = DataLoader(unseen_valid_data, batch_size=min_batch_size, shuffle=True)
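
# Aside (my note): the WeightedRandomSampler above is built but never used. If balanced
# sampling is actually intended, it has to be passed to the DataLoader, and shuffle must
# be dropped, since sampler and shuffle=True are mutually exclusive:
balanced_train_dataloader = DataLoader(train_data, batch_size=min_batch_size, sampler=sampler)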


classify_loss_function = torch.nn.CrossEntropyLoss(reduction='sum', ignore_index=-1).cuda()  # label -1 (the unlabeled marker) is excluded from the supervised loss
optimizer = torch.optim.SGD(net_student.parameters(), lr=0.001, momentum=0.9)

globals_step = 0
for epoch in range(10000):
    globals_classify_loss = 0
    globals_consistency_loss = 0
    net_student.train()
    start = time.time()
    end = 0

    for index, (x, y) in enumerate(train_dataloader):
        optimizer.zero_grad()

        x_student = x[0].cuda()  # autograd.Variable is deprecated; tensors work directly
        y = y.cuda()
        predict_student = net_student(x_student)

        classify_loss = classify_loss_function(predict_student, y) / min_batch_size
        sum_loss = classify_loss

        # the teacher's forward pass needs no gradients
        with torch.no_grad():
            x_teacher = x[1].cuda()
            predict_teacher = net_teacher(x_teacher)

        ema_logit = predict_teacher.detach()
        consistency_loss = softmax_mse_loss(predict_student, ema_logit) / min_batch_size
        consistency_weight = 1
        sum_loss += consistency_weight * consistency_loss
        globals_consistency_loss += consistency_loss.item()

        sum_loss.backward()
        optimizer.step()
        alpha = min(1 - 1 / (globals_step + 1), 0.99)  # EMA decay ramps from 0 toward 0.99
        update_ema_variables(net_student, net_teacher, alpha)

        globals_classify_loss += classify_loss.item()
        globals_step += 1

    if epoch % 5 != 0:  # evaluate and log every 5 epochs
        continue
    # Evaluate on the labeled-data validation set
    net_student.eval()
    labeled_valid_correct = 0
    labeled_valid_total = 0
    for images, labels in labeled_valid_dataloader:
        with torch.no_grad():  # the whole forward pass should run without gradients
            outputs = net_student(images.cuda())
        _, predicted = torch.max(outputs, 1)
        labeled_valid_total += labels.size(0)
        labeled_valid_correct += (predicted.cpu() == labels).sum()
    labeled_valid_accuracy = labeled_valid_correct.float() / labeled_valid_total
    # Evaluate on the unlabeled-data validation set
    net_student.eval()
    unlabeled_valid_correct = 0
    unlabeled_valid_total = 0
    for images, labels in unlabeled_valid_dataloader:
        with torch.no_grad():
            outputs = net_student(images.cuda())
        _, predicted = torch.max(outputs, 1)
        unlabeled_valid_total += labels.size(0)
        unlabeled_valid_correct += (predicted.cpu() == labels).sum()
    unlabeled_valid_accuracy = unlabeled_valid_correct.float() / unlabeled_valid_total
    # Evaluate on the unseen-data validation set
    net_student.eval()
    unseen_valid_correct = 0
    unseen_valid_total = 0
    for images, labels in unseen_valid_dataloader:
        with torch.no_grad():
            outputs = net_student(images.cuda())
        _, predicted = torch.max(outputs, 1)
        unseen_valid_total += labels.size(0)
        unseen_valid_correct += (predicted.cpu() == labels).sum()
    unseen_valid_accuracy = unseen_valid_correct.float() / unseen_valid_total

    fmtContent = "epoch:{}, time:{}, labeled_acc: {}, unlabeled_acc: {}, unseen_acc: {}, consistency loss:{}, classify loss: {}"
    logContent = fmtContent.format(epoch+1, format(time.time() - start, '.0f'),
                                   format(100 * labeled_valid_accuracy, '.1f'),
                                   format(100 * unlabeled_valid_accuracy, '.1f'),
                                   format(100 * unseen_valid_accuracy, '.1f'),
                                   format(globals_consistency_loss, '.4f'),
                                   format(globals_classify_loss, '.4f'))
    print(logContent)
    torch.save(net_student.state_dict(), 'white.pt') 
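
For completeness, softmax_mse_loss is also pulled in from the downloaded repo and not shown above. In the reference mean-teacher implementation it is an MSE between the two softmax outputs, roughly as follows (a sketch, not necessarily the repo's exact code):

import torch.nn.functional as F

def softmax_mse_loss(input_logits, target_logits):
    # compare class-probability vectors rather than raw logits
    assert input_logits.size() == target_logits.size()
    num_classes = input_logits.size(1)
    input_softmax = F.softmax(input_logits, dim=1)
    target_softmax = F.softmax(target_logits, dim=1)
    return F.mse_loss(input_softmax, target_softmax, reduction='sum') / num_classes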

Run results and error output
epoch:1, time:15, labeled: [0.488 | 0.594 (0.492, 0.750)], unlabeled: [0.519 | 0.616 (0.513, 0.771)], unseen: [0.350 | 0.500 (0.406, 0.650)], consistency loss:0.3052, classify loss: 3.8403
epoch:6, time:14, labeled: [0.538 | 0.584 (0.531, 0.650)], unlabeled: [0.538 | 0.601 (0.529, 0.694)], unseen: [0.575 | 0.638 (0.556, 0.750)], consistency loss:0.3009, classify loss: 3.7002
epoch:11, time:14, labeled: [0.525 | 0.568 (0.521, 0.625)], unlabeled: [0.521 | 0.573 (0.517, 0.644)], unseen: [0.575 | 0.585 (0.571, 0.600)], consistency loss:0.2735, classify loss: 3.7976
epoch:16, time:14, labeled: [0.562 | 0.598 (0.553, 0.650)], unlabeled: [0.525 | 0.565 (0.521, 0.618)], unseen: [0.450 | 0.522 (0.462, 0.600)], consistency loss:0.2451, classify loss: 3.6522
epoch:21, time:14, labeled: [0.550 | 0.617 (0.537, 0.725)], unlabeled: [0.547 | 0.572 (0.542, 0.606)], unseen: [0.500 | 0.545 (0.500, 0.600)], consistency loss:0.2428, classify loss: 3.7909
epoch:26, time:14, labeled: [0.625 | 0.681 (0.593, 0.800)], unlabeled: [0.522 | 0.555 (0.519, 0.597)], unseen: [0.475 | 0.553 (0.481, 0.650)], consistency loss:0.2509, classify loss: 3.7309
epoch:31, time:14, labeled: [0.550 | 0.609 (0.538, 0.700)], unlabeled: [0.519 | 0.551 (0.517, 0.591)], unseen: [0.475 | 0.533 (0.480, 0.600)], consistency loss:0.2427, classify loss: 3.6363
epoch:36, time:14, labeled: [0.575 | 0.585 (0.571, 0.600)], unlabeled: [0.569 | 0.590 (0.563, 0.621)], unseen: [0.500 | 0.545 (0.500, 0.600)], consistency loss:0.2344, classify loss: 3.7218
epoch:41, time:14, labeled: [0.587 | 0.593 (0.585, 0.600)], unlabeled: [0.532 | 0.520 (0.534, 0.506)], unseen: [0.450 | 0.421 (0.444, 0.400)], consistency loss:0.2545, classify loss: 3.6390
epoch:46, time:14, labeled: [0.575 | 0.595 (0.568, 0.625)], unlabeled: [0.557 | 0.594 (0.549, 0.647)], unseen: [0.375 | 0.390 (0.381, 0.400)], consistency loss:0.2133, classify loss: 3.6582
epoch:51, time:14, labeled: [0.550 | 0.538 (0.553, 0.525)], unlabeled: [0.528 | 0.529 (0.528, 0.529)], unseen: [0.475 | 0.400 (0.467, 0.350)], consistency loss:0.2194, classify loss: 3.6781
epoch:56, time:14, labeled: [0.550 | 0.500 (0.562, 0.450)], unlabeled: [0.529 | 0.508 (0.532, 0.485)], unseen: [0.475 | 0.364 (0.462, 0.300)], consistency loss:0.2026, classify loss: 3.6289
epoch:61, time:14, labeled: [0.587 | 0.535 (0.613, 0.475)], unlabeled: [0.553 | 0.534 (0.558, 0.512)], unseen: [0.475 | 0.364 (0.462, 0.300)], consistency loss:0.2204, classify loss: 3.5899
epoch:66, time:14, labeled: [0.562 | 0.545 (0.568, 0.525)], unlabeled: [0.532 | 0.512 (0.535, 0.491)], unseen: [0.425 | 0.410 (0.421, 0.400)], consistency loss:0.2036, classify loss: 3.5391
epoch:71, time:14, labeled: [0.587 | 0.548 (0.606, 0.500)], unlabeled: [0.540 | 0.519 (0.543, 0.497)], unseen: [0.500 | 0.375 (0.500, 0.300)], consistency loss:0.1979, classify loss: 3.6498
epoch:76, time:14, labeled: [0.562 | 0.507 (0.581, 0.450)], unlabeled: [0.535 | 0.518 (0.538, 0.500)], unseen: [0.575 | 0.541 (0.588, 0.500)], consistency loss:0.2103, classify loss: 3.5881
epoch:81, time:14, labeled: [0.600 | 0.568 (0.618, 0.525)], unlabeled: [0.544 | 0.516 (0.550, 0.485)], unseen: [0.475 | 0.400 (0.467, 0.350)], consistency loss:0.2287, classify loss: 3.5967
epoch:86, time:14, labeled: [0.562 | 0.545 (0.568, 0.525)], unlabeled: [0.538 | 0.530 (0.540, 0.521)], unseen: [0.550 | 0.500 (0.562, 0.450)], consistency loss:0.2026, classify loss: 3.5866
epoch:91, time:14, labeled: [0.562 | 0.557 (0.564, 0.550)], unlabeled: [0.550 | 0.536 (0.553, 0.521)], unseen: [0.500 | 0.444 (0.500, 0.400)], consistency loss:0.2232, classify loss: 3.6391
epoch:96, time:14, labeled: [0.512 | 0.381 (0.522, 0.300)], unlabeled: [0.537 | 0.476 (0.548, 0.421)], unseen: [0.525 | 0.345 (0.556, 0.250)], consistency loss:0.2234, classify loss: 3.5710
epoch:101, time:14, labeled: [0.550 | 0.514 (0.559, 0.475)], unlabeled: [0.534 | 0.493 (0.540, 0.453)], unseen: [0.575 | 0.414 (0.667, 0.300)], consistency loss:0.2022, classify loss: 3.6959
epoch:106, time:14, labeled: [0.562 | 0.493 (0.586, 0.425)], unlabeled: [0.543 | 0.502 (0.551, 0.462)], unseen: [0.500 | 0.286 (0.500, 0.200)], consistency loss:0.2285, classify loss: 3.6703
epoch:111, time:14, labeled: [0.613 | 0.563 (0.645, 0.500)], unlabeled: [0.532 | 0.511 (0.535, 0.488)], unseen: [0.525 | 0.387 (0.545, 0.300)], consistency loss:0.2043, classify loss: 3.6291
epoch:116, time:14, labeled: [0.637 | 0.613 (0.657, 0.575)], unlabeled: [0.535 | 0.511 (0.539, 0.485)], unseen: [0.575 | 0.452 (0.636, 0.350)], consistency loss:0.2219, classify loss: 3.5004
epoch:121, time:14, labeled: [0.538 | 0.479 (0.548, 0.425)], unlabeled: [0.553 | 0.519 (0.562, 0.482)], unseen: [0.550 | 0.438 (0.583, 0.350)], consistency loss:0.2062, classify loss: 3.5431
epoch:126, time:14, labeled: [0.525 | 0.500 (0.528, 0.475)], unlabeled: [0.550 | 0.522 (0.557, 0.491)], unseen: [0.550 | 0.438 (0.583, 0.350)], consistency loss:0.2271, classify loss: 3.6269
epoch:131, time:14, labeled: [0.600 | 0.500 (0.667, 0.400)], unlabeled: [0.556 | 0.492 (0.575, 0.429)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2220, classify loss: 3.4943
epoch:136, time:14, labeled: [0.613 | 0.508 (0.696, 0.400)], unlabeled: [0.557 | 0.507 (0.572, 0.456)], unseen: [0.600 | 0.500 (0.667, 0.400)], consistency loss:0.2317, classify loss: 3.5881
epoch:141, time:14, labeled: [0.538 | 0.431 (0.560, 0.350)], unlabeled: [0.551 | 0.512 (0.561, 0.471)], unseen: [0.525 | 0.387 (0.545, 0.300)], consistency loss:0.2113, classify loss: 3.5978
epoch:146, time:14, labeled: [0.625 | 0.559 (0.679, 0.475)], unlabeled: [0.563 | 0.492 (0.588, 0.424)], unseen: [0.550 | 0.438 (0.583, 0.350)], consistency loss:0.2048, classify loss: 3.5434
epoch:151, time:14, labeled: [0.550 | 0.514 (0.559, 0.475)], unlabeled: [0.581 | 0.551 (0.593, 0.515)], unseen: [0.550 | 0.438 (0.583, 0.350)], consistency loss:0.2166, classify loss: 3.5525
epoch:156, time:14, labeled: [0.600 | 0.529 (0.643, 0.450)], unlabeled: [0.569 | 0.512 (0.590, 0.453)], unseen: [0.525 | 0.345 (0.556, 0.250)], consistency loss:0.2067, classify loss: 3.5426
epoch:161, time:14, labeled: [0.600 | 0.467 (0.700, 0.350)], unlabeled: [0.571 | 0.513 (0.592, 0.453)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2095, classify loss: 3.4690
epoch:166, time:14, labeled: [0.650 | 0.588 (0.714, 0.500)], unlabeled: [0.572 | 0.542 (0.583, 0.506)], unseen: [0.675 | 0.606 (0.769, 0.500)], consistency loss:0.2121, classify loss: 3.5925
epoch:171, time:14, labeled: [0.637 | 0.554 (0.720, 0.450)], unlabeled: [0.585 | 0.519 (0.618, 0.447)], unseen: [0.600 | 0.500 (0.667, 0.400)], consistency loss:0.2074, classify loss: 3.4678
epoch:176, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.575 | 0.503 (0.606, 0.429)], unseen: [0.625 | 0.545 (0.692, 0.450)], consistency loss:0.1968, classify loss: 3.5354
epoch:181, time:14, labeled: [0.637 | 0.567 (0.704, 0.475)], unlabeled: [0.554 | 0.507 (0.567, 0.459)], unseen: [0.625 | 0.545 (0.692, 0.450)], consistency loss:0.2219, classify loss: 3.6012
epoch:186, time:14, labeled: [0.625 | 0.531 (0.708, 0.425)], unlabeled: [0.569 | 0.539 (0.580, 0.503)], unseen: [0.475 | 0.400 (0.467, 0.350)], consistency loss:0.2044, classify loss: 3.4680
epoch:191, time:14, labeled: [0.600 | 0.515 (0.654, 0.425)], unlabeled: [0.563 | 0.485 (0.591, 0.412)], unseen: [0.600 | 0.500 (0.667, 0.400)], consistency loss:0.2166, classify loss: 3.4394
epoch:196, time:14, labeled: [0.575 | 0.469 (0.625, 0.375)], unlabeled: [0.557 | 0.446 (0.596, 0.356)], unseen: [0.500 | 0.412 (0.500, 0.350)], consistency loss:0.2037, classify loss: 3.4825
epoch:201, time:14, labeled: [0.613 | 0.537 (0.667, 0.450)], unlabeled: [0.574 | 0.508 (0.600, 0.441)], unseen: [0.550 | 0.471 (0.571, 0.400)], consistency loss:0.2130, classify loss: 3.4855
epoch:206, time:14, labeled: [0.637 | 0.567 (0.704, 0.475)], unlabeled: [0.572 | 0.496 (0.603, 0.421)], unseen: [0.575 | 0.452 (0.636, 0.350)], consistency loss:0.2037, classify loss: 3.3591
epoch:211, time:14, labeled: [0.625 | 0.531 (0.708, 0.425)], unlabeled: [0.557 | 0.450 (0.594, 0.362)], unseen: [0.650 | 0.563 (0.750, 0.450)], consistency loss:0.2135, classify loss: 3.3571
epoch:216, time:14, labeled: [0.675 | 0.594 (0.792, 0.475)], unlabeled: [0.576 | 0.498 (0.611, 0.421)], unseen: [0.550 | 0.438 (0.583, 0.350)], consistency loss:0.2491, classify loss: 3.4619
epoch:221, time:14, labeled: [0.688 | 0.590 (0.857, 0.450)], unlabeled: [0.569 | 0.464 (0.614, 0.374)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2109, classify loss: 3.4393
epoch:226, time:14, labeled: [0.700 | 0.613 (0.864, 0.475)], unlabeled: [0.563 | 0.434 (0.616, 0.335)], unseen: [0.625 | 0.516 (0.727, 0.400)], consistency loss:0.2151, classify loss: 3.4807
epoch:231, time:14, labeled: [0.663 | 0.571 (0.783, 0.450)], unlabeled: [0.560 | 0.476 (0.589, 0.400)], unseen: [0.575 | 0.452 (0.636, 0.350)], consistency loss:0.2199, classify loss: 3.5555
epoch:236, time:14, labeled: [0.688 | 0.590 (0.857, 0.450)], unlabeled: [0.557 | 0.382 (0.633, 0.274)], unseen: [0.625 | 0.444 (0.857, 0.300)], consistency loss:0.2196, classify loss: 3.4587
epoch:241, time:14, labeled: [0.675 | 0.567 (0.850, 0.425)], unlabeled: [0.568 | 0.454 (0.616, 0.359)], unseen: [0.675 | 0.581 (0.818, 0.450)], consistency loss:0.2347, classify loss: 3.4056
epoch:246, time:14, labeled: [0.675 | 0.567 (0.850, 0.425)], unlabeled: [0.559 | 0.430 (0.608, 0.332)], unseen: [0.550 | 0.357 (0.625, 0.250)], consistency loss:0.2244, classify loss: 3.4264
epoch:251, time:14, labeled: [0.600 | 0.484 (0.682, 0.375)], unlabeled: [0.566 | 0.447 (0.617, 0.350)], unseen: [0.525 | 0.345 (0.556, 0.250)], consistency loss:0.2188, classify loss: 3.3822
epoch:256, time:14, labeled: [0.688 | 0.590 (0.857, 0.450)], unlabeled: [0.557 | 0.399 (0.621, 0.294)], unseen: [0.575 | 0.414 (0.667, 0.300)], consistency loss:0.2432, classify loss: 3.4697
epoch:261, time:14, labeled: [0.663 | 0.557 (0.810, 0.425)], unlabeled: [0.563 | 0.423 (0.623, 0.321)], unseen: [0.550 | 0.400 (0.600, 0.300)], consistency loss:0.2099, classify loss: 3.4132
epoch:266, time:14, labeled: [0.637 | 0.540 (0.739, 0.425)], unlabeled: [0.581 | 0.481 (0.632, 0.388)], unseen: [0.600 | 0.500 (0.667, 0.400)], consistency loss:0.2280, classify loss: 3.5044
epoch:271, time:14, labeled: [0.650 | 0.533 (0.800, 0.400)], unlabeled: [0.579 | 0.452 (0.648, 0.347)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2633, classify loss: 3.4395
epoch:276, time:14, labeled: [0.675 | 0.567 (0.850, 0.425)], unlabeled: [0.572 | 0.421 (0.650, 0.312)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2338, classify loss: 3.2912
epoch:281, time:14, labeled: [0.650 | 0.533 (0.800, 0.400)], unlabeled: [0.575 | 0.447 (0.639, 0.344)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2239, classify loss: 3.3475
epoch:286, time:14, labeled: [0.650 | 0.517 (0.833, 0.375)], unlabeled: [0.574 | 0.429 (0.649, 0.321)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2327, classify loss: 3.4151
epoch:291, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.563 | 0.393 (0.644, 0.282)], unseen: [0.575 | 0.414 (0.667, 0.300)], consistency loss:0.2385, classify loss: 3.2656
epoch:296, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.579 | 0.435 (0.663, 0.324)], unseen: [0.625 | 0.516 (0.727, 0.400)], consistency loss:0.2446, classify loss: 3.3134
epoch:301, time:14, labeled: [0.688 | 0.576 (0.895, 0.425)], unlabeled: [0.568 | 0.412 (0.644, 0.303)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2513, classify loss: 3.3971
epoch:306, time:14, labeled: [0.650 | 0.533 (0.800, 0.400)], unlabeled: [0.546 | 0.358 (0.610, 0.253)], unseen: [0.600 | 0.429 (0.750, 0.300)], consistency loss:0.2451, classify loss: 3.3876
epoch:311, time:14, labeled: [0.688 | 0.590 (0.857, 0.450)], unlabeled: [0.568 | 0.432 (0.629, 0.329)], unseen: [0.625 | 0.516 (0.727, 0.400)], consistency loss:0.2361, classify loss: 3.5287
epoch:316, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.568 | 0.402 (0.651, 0.291)], unseen: [0.675 | 0.552 (0.889, 0.400)], consistency loss:0.2473, classify loss: 3.3676
epoch:321, time:14, labeled: [0.650 | 0.533 (0.800, 0.400)], unlabeled: [0.565 | 0.383 (0.657, 0.271)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2344, classify loss: 3.3245
epoch:326, time:14, labeled: [0.688 | 0.576 (0.895, 0.425)], unlabeled: [0.554 | 0.367 (0.633, 0.259)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2357, classify loss: 3.3995
epoch:331, time:14, labeled: [0.675 | 0.567 (0.850, 0.425)], unlabeled: [0.562 | 0.382 (0.648, 0.271)], unseen: [0.650 | 0.533 (0.800, 0.400)], consistency loss:0.2419, classify loss: 3.4256
epoch:336, time:14, labeled: [0.663 | 0.526 (0.882, 0.375)], unlabeled: [0.551 | 0.352 (0.634, 0.244)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2387, classify loss: 3.3730
epoch:341, time:14, labeled: [0.675 | 0.581 (0.818, 0.450)], unlabeled: [0.566 | 0.409 (0.642, 0.300)], unseen: [0.675 | 0.552 (0.889, 0.400)], consistency loss:0.2473, classify loss: 3.2583
epoch:346, time:14, labeled: [0.650 | 0.500 (0.875, 0.350)], unlabeled: [0.549 | 0.307 (0.660, 0.200)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2446, classify loss: 3.3375
epoch:351, time:14, labeled: [0.712 | 0.623 (0.905, 0.475)], unlabeled: [0.559 | 0.348 (0.667, 0.235)], unseen: [0.600 | 0.429 (0.750, 0.300)], consistency loss:0.2361, classify loss: 3.3713
epoch:356, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.549 | 0.337 (0.634, 0.229)], unseen: [0.650 | 0.500 (0.875, 0.350)], consistency loss:0.2482, classify loss: 3.4326
epoch:361, time:14, labeled: [0.688 | 0.576 (0.895, 0.425)], unlabeled: [0.553 | 0.350 (0.641, 0.241)], unseen: [0.650 | 0.533 (0.800, 0.400)], consistency loss:0.2332, classify loss: 3.5220
epoch:366, time:14, labeled: [0.675 | 0.567 (0.850, 0.425)], unlabeled: [0.556 | 0.349 (0.653, 0.238)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2477, classify loss: 3.2959
epoch:371, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.562 | 0.382 (0.648, 0.271)], unseen: [0.650 | 0.533 (0.800, 0.400)], consistency loss:0.2529, classify loss: 3.3959
epoch:376, time:14, labeled: [0.650 | 0.517 (0.833, 0.375)], unlabeled: [0.549 | 0.328 (0.641, 0.221)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2297, classify loss: 3.3433
epoch:381, time:14, labeled: [0.663 | 0.571 (0.783, 0.450)], unlabeled: [0.562 | 0.379 (0.650, 0.268)], unseen: [0.650 | 0.533 (0.800, 0.400)], consistency loss:0.2445, classify loss: 3.3137
epoch:386, time:14, labeled: [0.725 | 0.645 (0.909, 0.500)], unlabeled: [0.544 | 0.349 (0.610, 0.244)], unseen: [0.625 | 0.516 (0.727, 0.400)], consistency loss:0.2405, classify loss: 3.2329
epoch:391, time:14, labeled: [0.725 | 0.645 (0.909, 0.500)], unlabeled: [0.571 | 0.411 (0.654, 0.300)], unseen: [0.625 | 0.516 (0.727, 0.400)], consistency loss:0.2596, classify loss: 3.1925
epoch:396, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.541 | 0.328 (0.613, 0.224)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2826, classify loss: 3.2024
epoch:401, time:14, labeled: [0.688 | 0.576 (0.895, 0.425)], unlabeled: [0.554 | 0.331 (0.664, 0.221)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2189, classify loss: 3.1378
epoch:406, time:14, labeled: [0.675 | 0.567 (0.850, 0.425)], unlabeled: [0.565 | 0.373 (0.667, 0.259)], unseen: [0.650 | 0.563 (0.750, 0.450)], consistency loss:0.2166, classify loss: 3.3102
epoch:411, time:14, labeled: [0.700 | 0.600 (0.900, 0.450)], unlabeled: [0.549 | 0.337 (0.634, 0.229)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2423, classify loss: 3.3352
epoch:416, time:14, labeled: [0.712 | 0.635 (0.870, 0.500)], unlabeled: [0.550 | 0.354 (0.627, 0.247)], unseen: [0.625 | 0.516 (0.727, 0.400)], consistency loss:0.2601, classify loss: 3.1962
epoch:421, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.541 | 0.310 (0.625, 0.206)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2432, classify loss: 3.4191
epoch:426, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.553 | 0.342 (0.648, 0.232)], unseen: [0.650 | 0.563 (0.750, 0.450)], consistency loss:0.2569, classify loss: 3.4070
epoch:431, time:14, labeled: [0.688 | 0.576 (0.895, 0.425)], unlabeled: [0.549 | 0.328 (0.641, 0.221)], unseen: [0.650 | 0.500 (0.875, 0.350)], consistency loss:0.2426, classify loss: 3.1688
epoch:436, time:14, labeled: [0.688 | 0.590 (0.857, 0.450)], unlabeled: [0.537 | 0.298 (0.615, 0.197)], unseen: [0.650 | 0.500 (0.875, 0.350)], consistency loss:0.2409, classify loss: 3.2159
epoch:441, time:14, labeled: [0.688 | 0.590 (0.857, 0.450)], unlabeled: [0.551 | 0.361 (0.628, 0.253)], unseen: [0.700 | 0.600 (0.900, 0.450)], consistency loss:0.2335, classify loss: 3.2771
epoch:446, time:14, labeled: [0.663 | 0.526 (0.882, 0.375)], unlabeled: [0.537 | 0.279 (0.629, 0.179)], unseen: [0.600 | 0.429 (0.750, 0.300)], consistency loss:0.2717, classify loss: 3.2846
epoch:451, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.549 | 0.343 (0.630, 0.235)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2414, classify loss: 3.2209
epoch:456, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.550 | 0.317 (0.657, 0.209)], unseen: [0.600 | 0.385 (0.833, 0.250)], consistency loss:0.2322, classify loss: 3.1511
epoch:461, time:14, labeled: [0.712 | 0.623 (0.905, 0.475)], unlabeled: [0.551 | 0.355 (0.632, 0.247)], unseen: [0.675 | 0.581 (0.818, 0.450)], consistency loss:0.2519, classify loss: 3.1907
epoch:466, time:14, labeled: [0.688 | 0.561 (0.941, 0.400)], unlabeled: [0.535 | 0.291 (0.613, 0.191)], unseen: [0.575 | 0.320 (0.800, 0.200)], consistency loss:0.2832, classify loss: 3.2678
epoch:471, time:14, labeled: [0.675 | 0.536 (0.938, 0.375)], unlabeled: [0.528 | 0.248 (0.609, 0.156)], unseen: [0.575 | 0.320 (0.800, 0.200)], consistency loss:0.2706, classify loss: 3.2920
epoch:476, time:14, labeled: [0.675 | 0.552 (0.889, 0.400)], unlabeled: [0.538 | 0.299 (0.620, 0.197)], unseen: [0.600 | 0.385 (0.833, 0.250)], consistency loss:0.2464, classify loss: 3.4532
epoch:481, time:14, labeled: [0.650 | 0.517 (0.833, 0.375)], unlabeled: [0.553 | 0.324 (0.664, 0.215)], unseen: [0.625 | 0.444 (0.857, 0.300)], consistency loss:0.2314, classify loss: 3.0036
epoch:486, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.549 | 0.328 (0.641, 0.221)], unseen: [0.625 | 0.483 (0.778, 0.350)], consistency loss:0.2521, classify loss: 3.1336
epoch:491, time:14, labeled: [0.688 | 0.561 (0.941, 0.400)], unlabeled: [0.547 | 0.306 (0.654, 0.200)], unseen: [0.650 | 0.500 (0.875, 0.350)], consistency loss:0.2349, classify loss: 3.1225
epoch:496, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.550 | 0.326 (0.649, 0.218)], unseen: [0.625 | 0.444 (0.857, 0.300)], consistency loss:0.2717, classify loss: 3.1223
epoch:501, time:14, labeled: [0.663 | 0.542 (0.842, 0.400)], unlabeled: [0.568 | 0.369 (0.683, 0.253)], unseen: [0.600 | 0.467 (0.700, 0.350)], consistency loss:0.2742, classify loss: 3.3268

My approach and what I have tried
What I want to achieve

A working Mean Teacher example that actually shows the method's benefit, so that I can study it further.
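
(One reference point while I debug: my code keeps consistency_weight fixed at 1, whereas the original Mean Teacher implementation ramps the consistency weight up over the first part of training with a sigmoid schedule, along these lines; the rampup length of 30 epochs is just an illustrative guess:)

import numpy as np

def sigmoid_rampup(current, rampup_length):
    # ramps from 0 at current=0 to 1 at current=rampup_length
    if rampup_length == 0:
        return 1.0
    current = np.clip(current, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

# e.g. in the epoch loop: consistency_weight = max_consistency_weight * sigmoid_rampup(epoch, 30)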