Published Papers Search Service
Title: Comparative of VGG and YOLO Models for Traffic Sign Under Weather Conditions Image Detection and Classification
Author: Amal Alshahrani, Leen Alshrif, Fatima Bajawi, Razan Alqarni, Reem Alharthi, and Haneen Alkurbi
Citation: Vol. 25, No. 5, pp. 11-20
Abstract: This study focuses on enhancing the accuracy of traffic sign detection systems for self-driving vehicles. With the increasing proliferation of autonomous vehicles, reliable detection and interpretation of traffic signs are crucial for road safety and efficiency. The primary goal of this research was to improve the performance of traffic sign detection, particularly in identifying unfamiliar signs and dealing with adverse weather conditions. We obtained a dataset of 3,480 images from Roboflow and utilized deep learning techniques, including Convolutional Neural Networks (CNNs) and algorithms such as YOLO and the Visual Geometry Group (VGG) architecture. Unlike previous studies that focused on a single version of YOLO, this study conducted a comparative analysis of different deep-learning models, including YOLOv5, YOLOv8, and VGG-16. The study results show promising outcomes, with YOLOv5 achieving an accuracy of up to 94.2%, YOLOv8 reaching 95.3% accuracy, and VGG-16 outperforming the other techniques with an impressive 100% accuracy. These findings highlight the significant potential for future advancements in traffic sign detection systems, contributing to the ongoing efforts to enhance the safety and efficiency of autonomous driving technologies.
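The model comparison reported in the abstract rests on classification accuracy (correct predictions over total test images). A minimal sketch of that metric is shown below; the model names come from the paper, but the prediction and label lists are hypothetical illustration, not the study's actual dataset or outputs.

```python
# Sketch of the accuracy metric used to compare models such as YOLOv5,
# YOLOv8, and VGG-16. The data below is hypothetical, for illustration only.

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    assert len(predictions) == len(labels), "lists must be the same length"
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical class ids for a 10-image test split (not the paper's data).
labels     = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
yolo_preds = [0, 1, 2, 3, 0, 1, 2, 0, 0, 1]  # one mistake -> 90.0%
vgg_preds  = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]  # all correct -> 100.0%

print(f"YOLO accuracy:   {accuracy(yolo_preds, labels):.1%}")  # 90.0%
print(f"VGG-16 accuracy: {accuracy(vgg_preds, labels):.1%}")   # 100.0%
```

In practice each model's predictions would come from its inference pipeline over the same held-out test split, so the three reported figures (94.2%, 95.3%, 100%) are directly comparable.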
Keywords: Traffic signs, detection, classification
URL: http://paper.ijcsns.org/07_book/202505/20250502.pdf