
Lightweight traffic sign detection network with fused foreground attention
Citation: Yu Linsen, Chen Zhiguo. Lightweight traffic sign detection network with fused foreground attention[J]. Journal of Electronic Measurement and Instrumentation, 2023, 37(1): 21-31
Authors: Yu Linsen, Chen Zhiguo
Affiliations: 1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; 2. Institute of Advanced Technology, Jiangnan University, Wuxi 214122, China; 3. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Wuxi 214122, China; 4. International Joint Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China
Funding: Supported by the National Natural Science Foundation of China (62073155)
Abstract: To address the false and missed detections that object detection models are prone to in traffic sign detection, a lightweight traffic sign detection network with fused foreground attention, YOLOT, is proposed. First, the SiLU activation function is introduced to improve detection accuracy. Second, a lightweight backbone network based on the ghost module is designed to extract object features effectively. Third, a foreground attention perception module is introduced to suppress background noise. Fourth, the path aggregation network is improved by adding a residual structure so that low-level feature information is fully learned. Finally, VariFocalLoss and GIoU are used to compute the classification loss of objects and the similarity between objects, respectively, making classification and localization more accurate. Extensive experiments on several datasets show that the proposed method outperforms current state-of-the-art methods in accuracy. In ablation experiments on the CCTSDB dataset, the final accuracy reaches 98.50%, an improvement of 1.32% over the baseline model, while the model occupies only 4.7 MB and the real-time detection frame rate reaches 44 FPS.
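To illustrate the localization measure named in the abstract, the following is a minimal plain-Python sketch of the standard GIoU formulation, not the authors' implementation; boxes are assumed to be axis-aligned tuples (x1, y1, x2, y2):

```python
def giou(box_a, box_b):
    """Generalized IoU between two axis-aligned boxes (x1, y1, x2, y2).

    GIoU = IoU - (area of smallest enclosing box - union) / enclosing area.
    Ranges over (-1, 1]; equals IoU when the boxes overlap tightly, and
    goes negative for well-separated boxes, giving a useful gradient
    even when plain IoU would be zero.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection (clamped to zero when the boxes do not overlap)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box of the pair
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch

    return iou - (c_area - union) / c_area
```

A GIoU-based loss is then typically taken as 1 - giou(pred, target), which penalizes both poor overlap and large enclosing gaps.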

Keywords: traffic sign detection; activation function; foreground attention; feature fusion; VariFocalLoss; GIoU

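For reference, the SiLU activation mentioned in the abstract has a one-line definition. This is a plain-Python sketch of the standard formula x · sigmoid(x), not code from the paper:

```python
import math

def silu(x):
    """SiLU (also known as Swish): x * sigmoid(x) = x / (1 + e^(-x)).

    Smooth and non-monotonic; unlike ReLU it passes a small negative
    signal for x < 0, which is often credited with small accuracy
    gains in detection backbones.
    """
    return x / (1.0 + math.exp(-x))
```

For large positive inputs SiLU approaches the identity (like ReLU), while silu(0) = 0 and negative inputs are softly suppressed rather than hard-zeroed.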
