Analysis of an Autonomous Driving System

Authors

DOI:

https://doi.org/10.46842/ipn.cien.v27n2a06

Keywords:

artificial intelligence, autonomous driving, LIDAR, object detection algorithm, YOLO

Abstract

Due to technological advances in automation and artificial intelligence applied to vehicle autonomy, driving-assistance levels have become increasingly relevant; according to INEGI, from 2016 to 2020 there was a sustained decrease in the number of deaths in traffic accidents [1]. For this reason, the Society of Automotive Engineers (SAE) standardized a classification that defines six levels of driving assistance, ranging from driving without automation to fully autonomous driving [2]. This article shows the operation of an autonomous driving system with assistance level 3, implemented in MATLAB Simulink. A scenario was developed using the Unreal Engine graphics engine, providing a realistic environment with pedestrians and car traffic as well as different track and road layouts. The performance tests were carried out in a simulated environment using the object detection algorithm You Only Look Once (YOLO) in its version 2. YOLO is responsible for detecting cars, pedestrians, and signs by means of a camera and a LIDAR (Light Detection and Ranging) sensor, which artificially extends the field of vision.
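As a point of reference for how a YOLOv2-style detector turns raw network output into the car, pedestrian, and sign boxes described above, the sketch below decodes a YOLOv2 feature map into bounding boxes. It is not the authors' implementation: the article uses MATLAB Simulink, while this example is written in Python/NumPy, and the grid size, anchor priors, and three-class list are assumptions chosen only for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolov2(output, anchors, num_classes, conf_threshold=0.5):
    """Decode a raw YOLOv2 feature map into bounding boxes.

    output: array of shape (S, S, B, 5 + num_classes), the last-layer
            activations for an S x S grid with B anchor boxes per cell.
    anchors: list of (width, height) priors, expressed in grid-cell units.
    Returns a list of (x, y, w, h, confidence, class_id) tuples, with
    x, y, w, h normalized to [0, 1] relative to the image.
    """
    S = output.shape[0]
    boxes = []
    for row in range(S):
        for col in range(S):
            for b, (p_w, p_h) in enumerate(anchors):
                tx, ty, tw, th, to = output[row, col, b, :5]
                class_scores = output[row, col, b, 5:]

                # YOLOv2 box parameterization: the cell offset plus a
                # sigmoid keeps the box center inside its cell; the
                # exponentials scale the anchor priors.
                x = (col + sigmoid(tx)) / S
                y = (row + sigmoid(ty)) / S
                w = p_w * np.exp(tw) / S
                h = p_h * np.exp(th) / S

                # Confidence = objectness * best class probability
                # (softmax over the class scores).
                probs = np.exp(class_scores - class_scores.max())
                probs /= probs.sum()
                class_id = int(np.argmax(probs))
                confidence = sigmoid(to) * probs[class_id]

                if confidence >= conf_threshold:
                    boxes.append((x, y, w, h, float(confidence), class_id))
    return boxes

# Hypothetical usage with made-up shapes: a 13x13 grid, 5 anchors, and
# 3 classes (car, pedestrian, sign), matching the detection targets
# mentioned in the abstract.
if __name__ == "__main__":
    anchors = [(1.3, 1.7), (3.2, 4.0), (5.1, 8.1), (9.5, 4.8), (11.2, 10.0)]
    raw = np.random.randn(13, 13, 5, 5 + 3)   # stand-in for network output
    detections = decode_yolov2(raw, anchors, num_classes=3)
    print(f"{len(detections)} boxes above threshold")
```

In practice, the decoded boxes would still be filtered with non-maximum suppression before being fused with the LIDAR point cloud.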

References

Instituto Nacional de Estadística y Geografía (INEGI), "Georreferenciación de accidentes de tránsito en zonas urbanas", 2021, url: https://www.inegi.org.mx/contenidos/saladeprensa/boletines/2021/accidentes/ACCIDENTES_2021.pdf

SAE International, "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles J3016_202104", 2022, url: https://saemobilus.sae.org/content/j3016_202104

O. Flores, M. Fabela, D. Vázquez, R. Hernández. "Conducción autónoma: Implicaciones", Publicación bimestral de divulgación externa/ Instituto Mexicano del Transporte, no. 172, 2018, url: https://imt.mx/resumenboletines.html?IdArticulo=462&IdBoletin=172

X. Basogain Olabe, Redes Neuronales artificiales y sus aplicaciones (curso), Escuela Superior de Ingeniería de Bilbao, 2008.

J. Redmon, A. Farhadi, "YOLO9000: Better, Faster, Stronger", in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, July 21–26, 2017. IEEE, 2017.

J. Redmon, S. Divvala, R. Girshick, A. Farhadi, "You only look once: Unified, real-time object detection", arXiv preprint arXiv:1506.02640, 2015, url: https://arxiv.org/abs/1506.02640

D. T. Nguyen, T. N. Nguyen, H. Kim, H. J. Lee, "A High-Throughput and Power-Efficient FPGA Implementation of YOLO CNN for Object Detection," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 27, no. 8, pp. 1861-1873, Aug. 2019, doi: https://doi.org/10.1109/TVLSI.2019.2905242

J. Redmon. Darknet: Open-source neural networks in C, 2016, url: http://pjreddie.com/darknet/

"YOLO: Real-Time Object Detection". Survival Strategies for the Robot Rebellion. https://pjreddie.com/darknet/yolov2/ (accedido el 20 de diciembre de 2022).

Y. Li, J. Ibanez-Guzman, "Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems", IEEE Signal Processing Magazine, vol. 37, 2020.

A. Bar et al., "The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing", IEEE Signal Processing Magazine, vol. 38, 2021.

"Unreal Engine | The most powerful real-time 3D creation tool". Unreal Engine. https://www.unrealengine.com/en-US

MathWorks, "Highway Lane Following- MATLAB & Simulink- MathWorks América Latina", 2023.

MathWorks, "Detect vehicles in lidar using image labels- MATLAB & simulink-mathworks américa latina", 2023.

Published

10-09-2024
