As mentioned earlier, applying AlexNet to a concrete task such as classification on the Oxford-IIIT Pets dataset has several drawbacks, for example:
Overfitting: Although AlexNet introduced techniques such as Dropout to reduce overfitting, its large architecture and huge parameter count still make overfitting likely when the training data is not rich or diverse enough. The Oxford-IIIT Pets dataset is relatively small, so the model may perform well on the training set but poorly on new test data or in real use.
Parameter count and compute: AlexNet has over 60 million parameters and several convolutional layers, which demand substantial compute and storage. For a comparatively small dataset like Oxford-IIIT Pets, such a large network is unlikely to be the most efficient choice.
Limited feature extraction: AlexNet's architecture was designed mainly for large, complex image datasets such as ImageNet. It may not be well suited to small datasets whose classes share highly similar features, such as pet breeds, where the distinguishing cues are subtle and AlexNet may fail to capture them.
Generalization: Because AlexNet was designed for generality rather than optimized for a specific task, it may generalize poorly to particular kinds of images, such as specific pet breeds.
Next we introduce EfficientNetV2, a smaller model that also achieves faster training.
The original paper
"EfficientNetV2: Smaller Models and Faster Training", by Mingxing Tan and Quoc V. Le, published in 2021. The paper introduces EfficientNetV2, a new family of convolutional networks with faster training speed and better parameter efficiency than previous models. By combining training-aware neural architecture search with scaling, the models are jointly optimized for training speed and parameter efficiency. They are searched from a search space enriched with new operations such as Fused-MBConv. Experiments show that EfficientNetV2 models train much faster than state-of-the-art models while using up to 6.8x fewer parameters.
The paper also proposes an improved method of progressive learning, which speeds up training by gradually increasing the image size during training; doing so naively, however, tends to hurt accuracy. To compensate for this accuracy drop, the authors adaptively adjust regularization (for example, data augmentation) along with the image size. With this progressive learning, EfficientNetV2 significantly outperforms previous models on ImageNet and on the CIFAR/Cars/Flowers datasets. Pretrained on the same ImageNet21k, EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x to 11x faster.
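To make the idea of progressive learning concrete, below is a minimal sketch (our own illustration, not code from the paper; the specific sizes and augmentation magnitudes are assumptions) showing how image size and a regularization strength could be interpolated together across training stages:

# Sketch of progressive learning: linearly interpolate image size and a
# regularization strength (e.g., augmentation magnitude) across training stages.
# The size/magnitude ranges here are illustrative, not the paper's exact values.
def progressive_schedule(stage: int, num_stages: int,
                         size_range=(128, 300), aug_range=(0.1, 0.5)):
    t = stage / max(num_stages - 1, 1)  # 0.0 at the first stage, 1.0 at the last
    img_size = int(size_range[0] + t * (size_range[1] - size_range[0]))
    aug_magnitude = aug_range[0] + t * (aug_range[1] - aug_range[0])
    return img_size, aug_magnitude

for stage in range(4):
    print(progressive_schedule(stage, 4))
# small images with weak regularization early, large images with strong regularization late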
The paper's main contributions are: introducing a new family of smaller, faster models; proposing an improved progressive learning method that adaptively adjusts regularization together with image size; and demonstrating up to 11x faster training and up to 6.8x better parameter efficiency on the ImageNet, CIFAR, Cars, and Flowers datasets.
This builds on the AlexNet-based animal recognition algorithm from the previous article.
Implementation of the network model:
DropPath: a module implementing Stochastic Depth, which regularizes the network by randomly dropping residual paths during training.
ConvBNAct: a convolution block consisting of a convolution layer, a batch-normalization layer, and an activation layer.
SqueezeExcite: implements the Squeeze-and-Excitation (SE) module, a channel attention mechanism that strengthens the feature representation.
MBConv: an improved block based on MobileNetV2, consisting of a point-wise expansion convolution, a depth-wise separable convolution, and a Squeeze-and-Excitation module.
FusedMBConv: a fused version of MBConv that replaces the 1x1 expansion convolution and the depth-wise convolution with a single regular convolution to improve efficiency.
EfficientNetV2: the main body of the EfficientNetV2 model, made up of a stem convolution layer, a sequence of MBConv and FusedMBConv blocks, and a classification head (including the fully connected layer).
efficientnetv2_s, efficientnetv2_m, efficientnetv2_l: factory functions that build the small, medium, and large EfficientNetV2 variants, respectively.
from collections import OrderedDict
from functools import partial
from typing import Callable, Optional

import torch
import torch.nn as nn
from torch import Tensor


def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf

    This function is taken from rwightman:
    https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py#L140
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    "Deep Networks with Stochastic Depth", https://arxiv.org/pdf/1603.09382.pdf
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class ConvBNAct(nn.Module):
    def __init__(self,
                 in_planes: int,
                 out_planes: int,
                 kernel_size: int = 3,
                 stride: int = 1,
                 groups: int = 1,
                 norm_layer: Optional[Callable[..., nn.Module]] = None,
                 activation_layer: Optional[Callable[..., nn.Module]] = None):
        super(ConvBNAct, self).__init__()

        padding = (kernel_size - 1) // 2
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        if activation_layer is None:
            activation_layer = nn.SiLU  # alias Swish (torch >= 1.7)

        self.conv = nn.Conv2d(in_channels=in_planes,
                              out_channels=out_planes,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              groups=groups,
                              bias=False)
        self.bn = norm_layer(out_planes)
        self.act = activation_layer()

    def forward(self, x):
        result = self.conv(x)
        result = self.bn(result)
        result = self.act(result)
        return result


class SqueezeExcite(nn.Module):
    def __init__(self,
                 input_c: int,   # block input channel
                 expand_c: int,  # block expand channel
                 se_ratio: float = 0.25):
        super(SqueezeExcite, self).__init__()
        squeeze_c = int(input_c * se_ratio)
        self.conv_reduce = nn.Conv2d(expand_c, squeeze_c, 1)
        self.act1 = nn.SiLU()  # alias Swish
        self.conv_expand = nn.Conv2d(squeeze_c, expand_c, 1)
        self.act2 = nn.Sigmoid()

    def forward(self, x: Tensor) -> Tensor:
        scale = x.mean((2, 3), keepdim=True)
        scale = self.conv_reduce(scale)
        scale = self.act1(scale)
        scale = self.conv_expand(scale)
        scale = self.act2(scale)
        return scale * x


class MBConv(nn.Module):
    def __init__(self,
                 kernel_size: int,
                 input_c: int,
                 out_c: int,
                 expand_ratio: int,
                 stride: int,
                 se_ratio: float,
                 drop_rate: float,
                 norm_layer: Callable[..., nn.Module]):
        super(MBConv, self).__init__()

        if stride not in [1, 2]:
            raise ValueError("illegal stride value.")

        self.has_shortcut = (stride == 1 and input_c == out_c)

        activation_layer = nn.SiLU  # alias Swish
        expanded_c = input_c * expand_ratio

        # In EfficientNetV2 there is no MBConv with expansion = 1, so expand_conv always exists
        assert expand_ratio != 1
        # Point-wise expansion
        self.expand_conv = ConvBNAct(input_c,
                                     expanded_c,
                                     kernel_size=1,
                                     norm_layer=norm_layer,
                                     activation_layer=activation_layer)

        # Depth-wise convolution
        self.dwconv = ConvBNAct(expanded_c,
                                expanded_c,
                                kernel_size=kernel_size,
                                stride=stride,
                                groups=expanded_c,
                                norm_layer=norm_layer,
                                activation_layer=activation_layer)

        self.se = SqueezeExcite(input_c, expanded_c, se_ratio) if se_ratio > 0 else nn.Identity()

        # Point-wise linear projection; note there is no activation here, so Identity is passed in
        self.project_conv = ConvBNAct(expanded_c,
                                      out_planes=out_c,
                                      kernel_size=1,
                                      norm_layer=norm_layer,
                                      activation_layer=nn.Identity)

        self.out_channels = out_c

        # The DropPath layer is only used when there is a shortcut connection
        self.drop_rate = drop_rate
        if self.has_shortcut and drop_rate > 0:
            self.dropout = DropPath(drop_rate)

    def forward(self, x: Tensor) -> Tensor:
        result = self.expand_conv(x)
        result = self.dwconv(result)
        result = self.se(result)
        result = self.project_conv(result)

        if self.has_shortcut:
            if self.drop_rate > 0:
                result = self.dropout(result)
            result += x
        return result


class FusedMBConv(nn.Module):
    def __init__(self,
                 kernel_size: int,
                 input_c: int,
                 out_c: int,
                 expand_ratio: int,
                 stride: int,
                 se_ratio: float,
                 drop_rate: float,
                 norm_layer: Callable[..., nn.Module]):
        super(FusedMBConv, self).__init__()

        assert stride in [1, 2]
        assert se_ratio == 0

        self.has_shortcut = stride == 1 and input_c == out_c
        self.has_expansion = expand_ratio != 1

        activation_layer = nn.SiLU  # alias Swish
        expanded_c = input_c * expand_ratio

        # expand_conv only exists when expand_ratio != 1
        if self.has_expansion:
            # Expansion convolution
            self.expand_conv = ConvBNAct(input_c,
                                         expanded_c,
                                         kernel_size=kernel_size,
                                         stride=stride,
                                         norm_layer=norm_layer,
                                         activation_layer=activation_layer)

            self.project_conv = ConvBNAct(expanded_c,
                                          out_c,
                                          kernel_size=1,
                                          norm_layer=norm_layer,
                                          activation_layer=nn.Identity)  # note: no activation
        else:
            # Case with only project_conv
            self.project_conv = ConvBNAct(input_c,
                                          out_c,
                                          kernel_size=kernel_size,
                                          stride=stride,
                                          norm_layer=norm_layer,
                                          activation_layer=activation_layer)  # note: with activation

        self.out_channels = out_c

        # The DropPath layer is only used when there is a shortcut connection
        self.drop_rate = drop_rate
        if self.has_shortcut and drop_rate > 0:
            self.dropout = DropPath(drop_rate)

    def forward(self, x: Tensor) -> Tensor:
        if self.has_expansion:
            result = self.expand_conv(x)
            result = self.project_conv(result)
        else:
            result = self.project_conv(x)

        if self.has_shortcut:
            if self.drop_rate > 0:
                result = self.dropout(result)
            result += x
        return result


class EfficientNetV2(nn.Module):
    def __init__(self,
                 model_cnf: list,
                 num_classes: int = 1000,
                 num_features: int = 1280,
                 dropout_rate: float = 0.2,
                 drop_connect_rate: float = 0.2):
        super(EfficientNetV2, self).__init__()

        for cnf in model_cnf:
            assert len(cnf) == 8

        norm_layer = partial(nn.BatchNorm2d, eps=1e-3, momentum=0.1)

        stem_filter_num = model_cnf[0][4]

        self.stem = ConvBNAct(3,
                              stem_filter_num,
                              kernel_size=3,
                              stride=2,
                              norm_layer=norm_layer)  # activation defaults to SiLU

        total_blocks = sum([i[0] for i in model_cnf])
        block_id = 0
        blocks = []
        for cnf in model_cnf:
            repeats = cnf[0]
            op = FusedMBConv if cnf[-2] == 0 else MBConv
            for i in range(repeats):
                blocks.append(op(kernel_size=cnf[1],
                                 input_c=cnf[4] if i == 0 else cnf[5],
                                 out_c=cnf[5],
                                 expand_ratio=cnf[3],
                                 stride=cnf[2] if i == 0 else 1,
                                 se_ratio=cnf[-1],
                                 drop_rate=drop_connect_rate * block_id / total_blocks,
                                 norm_layer=norm_layer))
                block_id += 1
        self.blocks = nn.Sequential(*blocks)

        head_input_c = model_cnf[-1][-3]
        head = OrderedDict()

        head.update({"project_conv": ConvBNAct(head_input_c,
                                               num_features,
                                               kernel_size=1,
                                               norm_layer=norm_layer)})  # activation defaults to SiLU

        head.update({"avgpool": nn.AdaptiveAvgPool2d(1)})
        head.update({"flatten": nn.Flatten()})

        if dropout_rate > 0:
            head.update({"dropout": nn.Dropout(p=dropout_rate, inplace=True)})
        head.update({"classifier": nn.Linear(num_features, num_classes)})

        self.head = nn.Sequential(head)

        # initialize weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out")
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x: Tensor) -> Tensor:
        x = self.stem(x)
        x = self.blocks(x)
        x = self.head(x)
        return x


def efficientnetv2_s(num_classes: int = 1000):
    """
    EfficientNetV2
    https://arxiv.org/abs/2104.00298
    """
    # train_size: 300, eval_size: 384

    # repeat, kernel, stride, expansion, in_c, out_c, operator, se_ratio
    model_config = [[2, 3, 1, 1, 24, 24, 0, 0],
                    [4, 3, 2, 4, 24, 48, 0, 0],
                    [4, 3, 2, 4, 48, 64, 0, 0],
                    [6, 3, 2, 4, 64, 128, 1, 0.25],
                    [9, 3, 1, 6, 128, 160, 1, 0.25],
                    [15, 3, 2, 6, 160, 256, 1, 0.25]]

    model = EfficientNetV2(model_cnf=model_config,
                           num_classes=num_classes,
                           dropout_rate=0.2)
    return model


def efficientnetv2_m(num_classes: int = 1000):
    """
    EfficientNetV2
    https://arxiv.org/abs/2104.00298
    """
    # train_size: 384, eval_size: 480

    # repeat, kernel, stride, expansion, in_c, out_c, operator, se_ratio
    model_config = [[3, 3, 1, 1, 24, 24, 0, 0],
                    [5, 3, 2, 4, 24, 48, 0, 0],
                    [5, 3, 2, 4, 48, 80, 0, 0],
                    [7, 3, 2, 4, 80, 160, 1, 0.25],
                    [14, 3, 1, 6, 160, 176, 1, 0.25],
                    [18, 3, 2, 6, 176, 304, 1, 0.25],
                    [5, 3, 1, 6, 304, 512, 1, 0.25]]

    model = EfficientNetV2(model_cnf=model_config,
                           num_classes=num_classes,
                           dropout_rate=0.3)
    return model


def efficientnetv2_l(num_classes: int = 1000):
    """
    EfficientNetV2
    https://arxiv.org/abs/2104.00298
    """
    # train_size: 384, eval_size: 480

    # repeat, kernel, stride, expansion, in_c, out_c, operator, se_ratio
    model_config = [[4, 3, 1, 1, 32, 32, 0, 0],
                    [7, 3, 2, 4, 32, 64, 0, 0],
                    [7, 3, 2, 4, 64, 96, 0, 0],
                    [10, 3, 2, 4, 96, 192, 1, 0.25],
                    [19, 3, 1, 6, 192, 224, 1, 0.25],
                    [25, 3, 2, 6, 224, 384, 1, 0.25],
                    [7, 3, 1, 6, 384, 640, 1, 0.25]]

    model = EfficientNetV2(model_cnf=model_config,
                           num_classes=num_classes,
                           dropout_rate=0.4)
    return model
Create a class_indices.json file:
This file stores the mapping between class names and indices. In an image classification task, it lets us convert the index output by the model into the corresponding class name, or select and process data by class index during preprocessing.
{ "0": "Abyssinian", "1": "Bengal", "2": "Birman", "3": "Bombay", "4": "British", "5": "Egyptian", "6": "Maine", "7": "Persian", "8": "Ragdoll", "9": "Russian", "10": "Siamese", "11": "Sphynx", "12": "american", "13": "american_pit_bull_terrie", "14": "basset", "15": "beagle", "16": "boxer", "17": "chihuahua", "18": "english", "19": "english_setter", "20": "german", "21": "great", "22": "havanese", "23": "japanese", "24": "keeshond", "25": "leonberger", "26": "miniature", "27": "newfoundland", "28": "pomeranian", "29": "pug", "30": "saint", "31": "samoyed", "32": "scottish", "33": "shiba", "34": "staffordshire", "35": "wheaten", "36": "yorkshire" }
Next, implement the training code:
The training loop follows the usual pattern, so the full code is given below; at the end the training process is visualized.
import os
import math
import argparse

import torch
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
import torch.optim.lr_scheduler as lr_scheduler
import matplotlib.pyplot as plt

from model import efficientnetv2_s as create_model
from my_dataset import MyDataSet
from utils import read_split_data, train_one_epoch, evaluate

os.environ["CUDA_VISIBLE_DEVICES"] = '0'


def main(args):
    # Lists to record loss and accuracy per epoch
    train_losses, val_losses, train_accs, val_accs = [], [], [], []

    device = torch.device(args.device if torch.cuda.is_available() else "cpu")
    print("Using device:", device)

    print(args)
    print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
    tb_writer = SummaryWriter()
    if os.path.exists("./weights") is False:
        os.makedirs("./weights")

    train_images_path, train_images_label, val_images_path, val_images_label = read_split_data(args.data_path)

    img_size = {"s": [300, 384],  # train_size, val_size
                "m": [384, 480],
                "l": [384, 480]}
    num_model = "s"

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(img_size[num_model][0]),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
        "val": transforms.Compose([transforms.Resize(img_size[num_model][1]),
                                   transforms.CenterCrop(img_size[num_model][1]),
                                   transforms.ToTensor(),
                                   transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])}

    # Instantiate the training dataset
    train_dataset = MyDataSet(images_path=train_images_path,
                              images_class=train_images_label,
                              transform=data_transform["train"])

    # Instantiate the validation dataset
    val_dataset = MyDataSet(images_path=val_images_path,
                            images_class=val_images_label,
                            transform=data_transform["val"])

    batch_size = args.batch_size
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers per process'.format(nw))
    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size,
                                               shuffle=True,
                                               pin_memory=True,
                                               num_workers=nw,
                                               collate_fn=train_dataset.collate_fn)

    val_loader = torch.utils.data.DataLoader(val_dataset,
                                             batch_size=batch_size,
                                             shuffle=False,
                                             pin_memory=True,
                                             num_workers=nw,
                                             collate_fn=val_dataset.collate_fn)

    # Load pre-trained weights if provided
    model = create_model(num_classes=args.num_classes).to(device)
    if args.weights != "":
        if os.path.exists(args.weights):
            weights_dict = torch.load(args.weights, map_location=device)
            load_weights_dict = {k: v for k, v in weights_dict.items()
                                 if model.state_dict()[k].numel() == v.numel()}
            print(model.load_state_dict(load_weights_dict, strict=False))
        else:
            raise FileNotFoundError("not found weights file: {}".format(args.weights))

    # Optionally freeze weights
    if args.freeze_layers:
        for name, para in model.named_parameters():
            # Freeze everything except the head
            if "head" not in name:
                para.requires_grad_(False)
            else:
                print("training {}".format(name))

    pg = [p for p in model.parameters() if p.requires_grad]
    optimizer = optim.SGD(pg, lr=args.lr, momentum=0.9, weight_decay=1E-4)
    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    lf = lambda x: ((1 + math.cos(x * math.pi / args.epochs)) / 2) * (1 - args.lrf) + args.lrf  # cosine
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)

    for epoch in range(args.epochs):
        # train
        train_loss, train_acc = train_one_epoch(model=model,
                                                optimizer=optimizer,
                                                data_loader=train_loader,
                                                device=device,
                                                epoch=epoch)

        scheduler.step()

        # validate
        val_loss, val_acc = evaluate(model=model,
                                     data_loader=val_loader,
                                     device=device,
                                     epoch=epoch)

        # Record loss and accuracy
        train_losses.append(train_loss)
        val_losses.append(val_loss)
        train_accs.append(train_acc)
        val_accs.append(val_acc)

        tags = ["train_loss", "train_acc", "val_loss", "val_acc", "learning_rate"]
        tb_writer.add_scalar(tags[0], train_loss, epoch)
        tb_writer.add_scalar(tags[1], train_acc, epoch)
        tb_writer.add_scalar(tags[2], val_loss, epoch)
        tb_writer.add_scalar(tags[3], val_acc, epoch)
        tb_writer.add_scalar(tags[4], optimizer.param_groups[0]["lr"], epoch)

        torch.save(model.state_dict(), "./weights/model-{}.pth".format(epoch))

    # After the training loop, plot the loss and accuracy curves
    plt.figure(figsize=(12, 6))
    plt.subplot(1, 2, 1)
    plt.plot(train_losses, label='Train Loss')
    plt.plot(val_losses, label='Validation Loss')
    plt.title('Loss Over Epochs')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()

    plt.subplot(1, 2, 2)
    plt.plot(train_accs, label='Train Accuracy')
    plt.plot(val_accs, label='Validation Accuracy')
    plt.title('Accuracy Over Epochs')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend()

    # Save the figure locally
    plt.savefig('./training_progress.png')
    plt.close()
    print('Training progress plot saved to ./training_progress.png')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--num_classes', type=int, default=37)
    parser.add_argument('--epochs', type=int, default=100)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--lr', type=float, default=0.01)
    parser.add_argument('--lrf', type=float, default=0.01)

    # Root directory of the dataset (Oxford-IIIT Pets: https://www.robots.ox.ac.uk/~vgg/data/pets/)
    parser.add_argument('--data-path', type=str, default="./datasets/images")

    # download model weights
    # link: https://pan.baidu.com/s/1uZX36rvrfEss-JGj4yfzbQ  password: 5gu1
    parser.add_argument('--weights', type=str, default='./pre_efficientnetv2-s.pth',
                        help='initial weights path')
    parser.add_argument('--freeze-layers', type=bool, default=True)
    parser.add_argument('--device', default='cuda:0', help='device id (i.e. 0 or 0,1 or cpu)')

    opt = parser.parse_args()
    main(opt)
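The script imports read_split_data, train_one_epoch, and evaluate from utils (and MyDataSet from my_dataset), which are not shown here. For reference, a minimal sketch of the two training helpers follows (our own simplified version under the assumption that the loaders yield (images, labels) batches; the real utils may add progress bars, gradient clipping, and so on):

# Minimal sketch of the helpers imported from utils (simplified).
import torch
import torch.nn.functional as F

def train_one_epoch(model, optimizer, data_loader, device, epoch):
    model.train()
    total_loss, correct, total = 0.0, 0, 0
    for images, labels in data_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * labels.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return total_loss / total, correct / total

@torch.no_grad()
def evaluate(model, data_loader, device, epoch):
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    for images, labels in data_loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)
        loss = F.cross_entropy(logits, labels)
        total_loss += loss.item() * labels.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return total_loss / total, correct / total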
Implementation of the inference code:
Once the model is trained, we can pick some new images and let the model make predictions.
import os
import json

import torch
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt
import matplotlib

from model import efficientnetv2_s as create_model

# Set Matplotlib's default font to one that supports CJK glyphs (SimHei here),
# in case any titles contain Chinese characters
matplotlib.rcParams['font.family'] = 'SimHei'
matplotlib.rcParams['axes.unicode_minus'] = False


def load_model(device, model_path, num_classes=37):
    model = create_model(num_classes=num_classes).to(device)
    model.load_state_dict(torch.load(model_path, map_location=device))
    model.eval()
    return model


def load_class_indices(json_path):
    with open(json_path, "r") as f:
        class_indict = json.load(f)
    return class_indict


def predict_image(model, img_path, data_transform, device, class_indict):
    # Load and transform the image; convert to RGB so grayscale/RGBA inputs also work
    img = Image.open(img_path).convert("RGB")
    img_transformed = data_transform(img)
    img_transformed = torch.unsqueeze(img_transformed, dim=0)

    # Run prediction
    with torch.no_grad():
        output = torch.squeeze(model(img_transformed.to(device))).cpu()
        predict = torch.softmax(output, dim=0)
        predict_cla = torch.argmax(predict).numpy()

    # Print the prediction
    print_res = "image: {} - class: {} - prob: {:.3f}".format(
        os.path.basename(img_path),
        class_indict[str(predict_cla)],
        predict[predict_cla].numpy())
    print(print_res)

    # Show the image with its predicted class as the title
    plt.figure(figsize=(5, 5))
    plt.imshow(img)
    plt.title(f"{class_indict[str(predict_cla)]} - prob: {predict[predict_cla].numpy():.3f}")
    plt.show()


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    img_size = {"s": [300, 384],  # train_size, val_size
                "m": [384, 480],
                "l": [384, 480]}
    num_model = "s"

    data_transform = transforms.Compose([
        transforms.Resize(img_size[num_model][1]),
        transforms.CenterCrop(img_size[num_model][1]),
        transforms.ToTensor(),
        transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
    ])

    # Load the class index mapping
    json_path = './class_indices.json'
    assert os.path.exists(json_path), "file '{}' does not exist.".format(json_path)
    class_indict = load_class_indices(json_path)

    # Create and load the model
    model = load_model(device, "./weights/model-29.pth", num_classes=37)

    # Folder of images to process
    folder_path = "./test_images"
    assert os.path.exists(folder_path), "folder '{}' does not exist.".format(folder_path)
    for img_file in os.listdir(folder_path):
        if img_file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.bmp')):
            img_path = os.path.join(folder_path, img_file)
            predict_image(model, img_path, data_transform, device, class_indict)


if __name__ == '__main__':
    main()
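If you want more than the single most likely class, the prediction step can be extended with torch.topk. The sketch below (a hypothetical helper of our own, taking the same arguments as the code above) prints the top-5 candidates with their probabilities:

# Illustrative variant: report the top-k classes instead of only the argmax.
import torch

def predict_topk(model, img_tensor, device, class_indict, k=5):
    with torch.no_grad():
        output = torch.squeeze(model(img_tensor.to(device))).cpu()
        probs = torch.softmax(output, dim=0)
        top_probs, top_idxs = torch.topk(probs, k=k)
    for p, idx in zip(top_probs, top_idxs):
        print("{}: {:.3f}".format(class_indict[str(idx.item())], p.item()))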
EfficientNetV2 adopts a more effective model design and training strategy, improving performance with a relatively small parameter count. For tasks such as animal classification, EfficientNetV2 can learn more representative features and thereby improve classification accuracy.
Above all, it is fast.