International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 08 Issue: 04 | Apr 2021 www.irjet.net p-ISSN: 2395-0072
© 2021, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 1
Self-Driving Cars: Automation Testing Using Udacity Simulator
Shahzeb Ali
AEE
Dept. of Automotive Electronics Eng., Coventry University, Coventry, UK
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - New and innovative cars with top features are launched around the globe daily, yet the mention of self-driving cars still garners a lot of attention. Work on self-driving cars has been going on for quite some time, but they are still not fully ready to be launched in the market. One of the reasons may be their inability to understand the common day-to-day etiquette that society follows on the road. Hence, this paper proposes a methodology for performing automated testing of self-driving vehicles using the Udacity Simulator.
Key Words: Machine Learning, Self-Driving, Behavioral
Cloning, Simulator
1. INTRODUCTION
We live in an exciting period in which more and more technology is developed each day, and automation is one of its blessings. Automation is a method of delivering output with minimal human intervention, and in some cases with no human intervention at all. Automated systems provide a level of accuracy that is quite difficult to achieve manually, and they are highly beneficial in fields such as the automotive industry, aerospace, and the military, where the tolerance for errors in calculation or output is negligible.
Machine learning has also grown widely in the field of automotive engineering, and many vehicles today are driven by algorithms learned through machine learning. With adequate help from machine learning, we can develop models that enable a vehicle to make decisions automatically and perform its tasks with great precision. One such task is developing a model for a self-driving car and testing it on a totally new track and environment in autonomous mode.
The following technologies provide a clear picture of the methodologies currently available for simulating and testing self-driving vehicles.
One approach is a deep learning algorithm that uses a virtual environment for the self-driving vehicle [2]. The data is collected using an end-to-end method and Nvidia autonomous driving techniques. After the training data has been collected, AlexNet, a convolutional neural network for pattern recognition, is used for training. Once the model has been trained on this virtual-environment data, a toy car is used to show how it performs in a real-world scenario.
Another method for implementing and testing self-driving cars is simulator- and algorithm-based automated testing, which assesses the performance of the self-driving car in a simulator. The simulation system creates different random and manually modified scenarios generated by an algorithm; a genetic algorithm is then used to augment both manual and random scenes to identify failures across the system [4].
A model that maps raw images can also be used. The focus is to develop a model that, using a deep learning algorithm, maps the raw images captured during training and testing to a correct steering angle. The data is collected using a vehicle platform built from a 1/10-scale RC car, a Raspberry Pi 3 Model B computer, and a front-facing camera [5].
A prototype-based approach to testing self-driving cars involves creating a self-driving car prototype (in simulation) that uses cameras for navigation and generates better output. For the prototype, a 3D virtual city is designed that depicts a real environment with traffic cars, traffic signals, and different types of obstacles. The target is for the self-driving vehicle to navigate easily without violating any traffic regulations or hitting any obstacles. It should be quick, efficient, and comfortable, so that fuel consumption is reduced and minimal jerk is felt throughout the journey [1].
In this paper, we use the Udacity Simulator for training and testing. The Udacity Simulator provides two modes: Training Mode and Autonomous Mode. In Training Mode, the user drives the vehicle manually with the keyboard keys to generate data: images are captured by three cameras attached to the front, left, and right of the vehicle, and the steering angle is recorded through the different turns and corners.
Figure -1: Udacity Simulator
Once the data is collected, it is processed and used to train a deep neural network, which produces a trained model. This model is later used in Autonomous Mode, where a server-client mechanism runs the vehicle autonomously.
Figure -2: Block Diagram: Self Driving Cars
The main outcomes of this paper are:
 Providing a method to shift from manual testing of vehicles to a safer and more accurate automated method of testing.
 Image processing and training techniques that give the vehicle high accuracy when tested in an environment that is totally new and different from the track on which it was trained.
2. DATA COLLECTION: TRAINING MODE
To train the neural network, data needs to be collected by driving the vehicle on one of the tracks provided in the Udacity simulator.
While recording in Training Mode, images from the three cameras attached to the vehicle in the virtual environment (on the front hood, left mirror, and right mirror) are collected continuously, along with the steering angle through the different turns on the road.
The images collected while driving are stored at a path that can be set at the beginning of the training process. Along with the images, a driving log .csv file is generated, which consists of the following columns:
Figure -3: .csv file column Description
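As a concrete sketch, the driving log can be parsed with the standard library. The column layout below (center/left/right image paths, steering, throttle, reverse, speed) is the Udacity simulator's usual headerless output format and is assumed here rather than taken from the figure:

```python
import csv
import io

# Assumed column order of the simulator's driving_log.csv (no header row).
COLUMNS = ["center", "left", "right", "steering", "throttle", "reverse", "speed"]

def parse_driving_log(fileobj):
    """Parse a driving_log.csv stream into a list of dicts with numeric fields."""
    rows = []
    for raw in csv.reader(fileobj):
        row = dict(zip(COLUMNS, raw))
        for key in ("steering", "throttle", "reverse", "speed"):
            row[key] = float(row[key])  # image paths stay as strings
        rows.append(row)
    return rows

# One synthetic log line, standing in for a recorded frame.
sample = io.StringIO(
    "IMG/center_1.jpg,IMG/left_1.jpg,IMG/right_1.jpg,-0.05,0.8,0.0,30.2\n"
)
log = parse_driving_log(sample)
```

Each parsed row then pairs three camera frames with one recorded steering angle, which is exactly the supervision signal used later for training.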
Figure -4: Driving Log.csv File
The vehicle is driven on the track using the following keyboard keys:
a. Up arrow: drive forward
b. Down arrow: reverse
c. Left arrow: turn left
d. Right arrow: turn right
The following figures show the images captured for training the vehicle in the virtual environment.
Figure -5: Image captured from Right Side Camera
Figure -6: Image captured from Left Side Camera
Figure -7: Image captured from Center Camera
3. IMAGE PROCESSING AND DATA AUGMENTATION
Once the data is collected, the next step in training is image augmentation and image processing. The images collected while recording the drive are not uniform: it is quite possible that there are more center images than left images, or vice versa, and the images may contain many pixels that are not useful for good training. Hence, it is better to apply some image augmentation and image processing techniques [14].
Following are some of the Image Augmentation techniques
used:
 Image Zooming
This method magnifies the image so that the important image pixels are seen more clearly. In zooming, pixels are inserted into the image to increase its size, either by pixel replication or by interpolation.
Figure -8: Image Zooming
 Image Panning
Image panning is an augmentation technique that repositions the image horizontally. It helps keep the main object or subject in focus while blurring the less essential background.
Figure -9: Image Panning
 Image Random Flip
This technique flips the images horizontally or vertically depending on the values specified. It is one of the methods used to make the dataset more uniform.
Figure -10: Random Flipping
 Image Brightness
The image brightness technique adjusts the intensity of the images acquired from the training dataset, making them clearer for better training with the CNN.
Figure -11: Image Brightness
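The augmentations above can be sketched in a few lines of NumPy. Note that negating the steering angle on a horizontal flip is standard behavioral-cloning practice (a mirrored road needs a mirrored steering command); this detail is assumed rather than stated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(image, steering):
    """Horizontally flip the image half the time; a flip mirrors the road,
    so the recorded steering angle must be negated to stay consistent."""
    if rng.random() < 0.5:
        return image[:, ::-1, :], -steering
    return image, steering

def random_brightness(image, low=0.6, high=1.4):
    """Scale pixel intensities by a random factor, clipped back to [0, 255]."""
    factor = rng.uniform(low, high)
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

# A synthetic 66x200 RGB frame standing in for a simulator image.
img = rng.integers(0, 256, size=(66, 200, 3), dtype=np.uint8)
flipped, angle = random_flip(img, steering=0.3)
brightened = random_brightness(img)
```

Applying these transforms on the fly during training effectively multiplies the dataset without recording more laps.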
Following are some of the image processing techniques used:
 Image Resizing
The image resizing technique varies the height and width of the image so that the images are uniform, the main object required for training is clearly visible, and unnecessary area is cropped.
 Image Gaussian Blur
A noise-reduction technique that blurs the images using a Gaussian function:

f(x) = a · e^(−(x − b)² / (2c²))

Where,
a = height of the curve's peak
b = position of the center of the peak
c = standard deviation (width of the bell)
f(x) = value of the Gaussian at x
e = Euler's number
x = input value (pixel offset)
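The Gaussian described above, with peak height a, center b, and standard deviation c, can be evaluated directly, and a normalized 1-D blur kernel built from it. This is a minimal sketch of where the blur weights come from; an actual pipeline would typically just call cv2.GaussianBlur:

```python
import math

def gaussian(x, a=1.0, b=0.0, c=1.0):
    """f(x) = a * exp(-(x - b)**2 / (2 * c**2)), matching the formula above."""
    return a * math.exp(-((x - b) ** 2) / (2 * c ** 2))

def gaussian_kernel(size=5, sigma=1.0):
    """1-D blur kernel: sample the Gaussian around its center and normalize
    so the weights sum to 1 (blurring then preserves overall brightness)."""
    half = size // 2
    weights = [gaussian(i, c=sigma) for i in range(-half, half + 1)]
    total = sum(weights)
    return [w / total for w in weights]

kernel = gaussian_kernel(5, 1.0)
```

Convolving an image with this kernel along each axis gives the separable Gaussian blur.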
 Image Color Conversion
The images are converted from RGB to YUV; the YUV color space is more efficient and useful for processing the images.
Figure -12: Image Color Conversion
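The RGB-to-YUV conversion amounts to one matrix multiply per pixel. The sketch below uses the standard BT.601 coefficients; an actual pipeline would more likely call cv2.cvtColor(img, cv2.COLOR_RGB2YUV), which applies the same luma weighting:

```python
import numpy as np

# BT.601 RGB -> YUV transform (standard coefficients, assumed here).
M = np.array([
    [ 0.299,  0.587,  0.114],   # Y: luma
    [-0.147, -0.289,  0.436],   # U: blue-difference chroma
    [ 0.615, -0.515, -0.100],   # V: red-difference chroma
])

def rgb_to_yuv(image):
    """Convert an HxWx3 RGB float image to YUV via a per-pixel matrix multiply."""
    return image @ M.T

# Pure white (R=G=B=1) should map to full luma and zero chroma.
white = np.ones((1, 1, 3))
yuv = rgb_to_yuv(white)
```

Separating luma from chroma lets the network key on road structure (mostly luma) rather than color, which is part of why the Nvidia pipeline favors YUV input.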
4. LEARNING FROM TRAINING DATA
Once the data has been captured and the images processed and augmented, the next step is splitting the data into a training set and a validation set. The function train_test_split is used to split the data. The data is split such that each set contains normalized data with a roughly equal number and variety of images. The sets contain the following numbers of images:
 Training samples: 2954
 Validation samples: 1456
Figure -13: Graphical representation: Training set, Validation set
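The split named above is sklearn's train_test_split; a pure-Python equivalent is sketched below. The validation fraction of 0.33 is an inference from the paper's counts (1456 of 4410 samples), not an explicitly stated parameter:

```python
import random

def train_validation_split(samples, val_fraction=0.33, seed=42):
    """Shuffle the samples, then carve off a validation slice
    (equivalent in spirit to sklearn's train_test_split)."""
    shuffled = samples[:]                      # leave the caller's list intact
    random.Random(seed).shuffle(shuffled)      # seeded for reproducibility
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

samples = list(range(4410))                    # 2954 + 1456 images, as in the paper
train, valid = train_validation_split(samples)
```

Shuffling before splitting matters here: frames recorded back-to-back are nearly identical, so an unshuffled split would leak near-duplicates between the sets.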
After splitting and processing the data, the most important step is learning from the training data using the Nvidia convolutional neural network (CNN) model. This model accurately maps the raw pixels of the collected images to the steering angles recorded in the driving log. The CNN learns from the data automatically based on the images fed into it, and the model is tuned by varying the hyperparameter values and adjusting the weights.
The Nvidia network architecture used consists of 9 layers: 5 convolutional layers and 4 fully connected layers. The size of the input image is 66 x 200 x 3 (height x width x depth).
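The layer shapes of this architecture can be checked with simple convolution arithmetic. The filter counts, kernel sizes, and strides below are taken from the cited Nvidia paper (Bojarski et al., 2016), not from this paper, so treat them as an assumption:

```python
def conv_out(size, kernel, stride):
    """Output length of a 'valid' (no-padding) convolution along one dimension."""
    return (size - kernel) // stride + 1

# (filters, kernel, stride) per conv layer, per Bojarski et al. (2016):
# three 5x5 stride-2 layers, then two 3x3 stride-1 layers.
CONV_LAYERS = [(24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1)]

def feature_map_shapes(height=66, width=200):
    """Trace the feature-map shape through the five convolutional layers."""
    shapes = []
    for filters, k, s in CONV_LAYERS:
        height, width = conv_out(height, k, s), conv_out(width, k, s)
        shapes.append((height, width, filters))
    return shapes

shapes = feature_map_shapes()
flattened = shapes[-1][0] * shapes[-1][1] * shapes[-1][2]  # units entering the FC stack
```

Starting from the 66 x 200 x 3 input, the feature map shrinks layer by layer until the final 1 x 18 x 64 map is flattened and passed through the fully connected 100-50-10-1 stack that ends in the single steering output.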
Figure -14: Network Parameters
The optimizer used in the network is Adam with a fixed learning rate. The activation function used for this model is ELU (exponential linear unit).
Figure -15: Model Characteristic
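For reference, the ELU activation mentioned above follows the standard definition (alpha = 1 by default): identity for positive inputs, and a smooth negative branch that saturates at -alpha instead of cutting to zero like ReLU:

```python
import math

def elu(x, alpha=1.0):
    """Exponential linear unit: x for x > 0, alpha * (e**x - 1) otherwise."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```

The nonzero gradient on the negative side helps avoid dead units, which is a common reason to prefer ELU over ReLU in this kind of regression network.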
Figure -16: CNN Architecture (Mariusz Bojarsk, 2016)
5. TESTING IN AUTONOMOUS MODE
Once the model is fully tuned and all hyperparameters are set to valid values, it is trained for 592 epochs of 400 samples each. The final validation loss is approximately 0.035.
Figure -17: Testing on a new track-Image1
Figure -18: Testing on a new Track-Image 2
Figure -19: Testing on a new Track-Image 3
A model.h5 file is generated. An H5 file is a file format used to store structured data; in our case it contains the tuned model, which is fed back into the Udacity simulator through a server-client script to drive the vehicle in autonomous mode. The vehicle is driven both on the track on which it was trained and on a totally new testing track, where it is completely unaware of the turns and corners. The vehicle performs successfully on both tracks, with little deviation from the center of the lane and no collisions throughout the journey.
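The server-client mechanism is commonly a Flask/socketio script (in the style of Udacity's drive.py) that receives telemetry frames, preprocesses the camera image, predicts steering from model.h5, and replies with steering and throttle. Stripped of the networking, one control step looks like the sketch below; the throttle law (approach a speed limit, back off in sharp turns) is an illustrative policy, not something specified in this paper:

```python
def compute_controls(predicted_steering, speed, speed_limit=10.0):
    """One control step of the autonomous loop: pass the model's steering
    prediction through, and choose a throttle that approaches the speed
    limit while easing off as the steering angle grows (illustrative)."""
    throttle = 1.0 - predicted_steering ** 2 - (speed / speed_limit) ** 2
    return predicted_steering, max(0.0, throttle)

# E.g. a gentle left prediction at half the target speed.
steer, throttle = compute_controls(predicted_steering=0.1, speed=5.0)
```

In the real loop this function would sit inside the telemetry event handler, with the image preprocessing mirroring exactly the training-time pipeline (crop, resize, YUV, normalize) so the model sees inputs in the same distribution it was trained on.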
CONCLUSIONS
Self-driving cars are an upcoming technology and stand to be highly successful in today's world, with fuel prices rising and pollution increasing each day. Self-driving cars can help not only with cost-cutting but also with reducing pollution, as they are mostly hybrid or fully electric vehicles.
As future work, we will work on adding object detection features so that the vehicle can operate in a real-world environment with proper control and speed variation based on the objects in its surroundings.
REFERENCES
[1] Bhaskar Barua, C. G. (2019). A Self-Driving Car
Implementation Using Computer Vision for Detection
and Navigation. IEEE.
[2] Juntae Kim, G. Y. (n.d.). Deep Learning Algorithm using
virtual Environment data for Self-Driving Car. IEEE.
[3] Naoki Shibuya. (2017). Introduction to Udacity Self-Driving Car Simulator. Retrieved from https://naokishibuya.medium.com/introduction-to-udacity-self-driving-car-simulator-4d78198d301d
[4] Straub, J. (2017). Automated Testing of a Self-Driving
Vehicle System. IEEE.
[5] Truong-Dong do, M. T.-V.-H. (2018). Real-Time Self-
Driving Car Navigation Using Deep Neural Network.
IEEE.
[6] Mariusz Bojarsk, D. D. (2016). End to End Learning for
Self-Driving Cars. arxiv.org
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E.
Tzeng, and T. Darrell, "Decaf: A deep Convolutional
activation feature for generic visual recognition," in
International conference on machine learning,2014, pp.
647-655.
[9] L. Fei-Fei, R. Fergus, and P. Perona, “Learning
generative visual models from few training examples:
An incremental Bayesian approach tested on 101
object categories,” Computer vision Image
understanding, vol. 106, no. 1, pp. 59-70, 2007.
[10] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba,
"Sun database: Large-scale scene recognition from
abbey to zoo," in Computer vision and pattern
recognition (CVPR), 2010 IEEE conference on, 2010,
pp. 3485-3492.
[11] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich
feature hierarchies for accurate object detection and
semantic segmentation," in Proceedings of the IEEE
conference on computer vision and pattern
recognition, 2014, pp. 580-587.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[13] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E.
Tzeng, and T. Darrell, "Decaf: A deep Convolutional
activation feature for generic visual recognition," in
International conference on machine learning,
2014, pp. 647-655.
[14] https://www.udemy.com/course/applied-deep-learningtm-the-complete-self-driving-car-course/

More Related Content

PPTX
Self driving car
PDF
Digital Marketing Portfolio
PDF
PPT
LinkedIn features guide
PPTX
Image Encryption in java ppt.
PPTX
face mask detection ppt66 (2).pptx
PPTX
YouTube Ads Products - Digital Media Decks
PDF
IRJET- Parking Space Detection using Image Processing in MATLAB
Self driving car
Digital Marketing Portfolio
LinkedIn features guide
Image Encryption in java ppt.
face mask detection ppt66 (2).pptx
YouTube Ads Products - Digital Media Decks
IRJET- Parking Space Detection using Image Processing in MATLAB

Similar to IRJET- Self-Driving Cars: Automation Testing using Udacity Simulator (20)

PDF
Self Driving Car
PDF
Self-Driving Car to Drive Autonomously using Image Processing and Deep Learning
PDF
IRJET-Survey on Simulation of Self-Driving Cars using Supervised and Reinforc...
PPTX
AI powered CAR DAMAGE detection project PPT
PDF
An Experimental Analysis on Self Driving Car Using CNN
PDF
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
PDF
Intelligent Parking Space Detection System Based on Image Segmentation
PDF
CAR DAMAGE DETECTION AND PRICE PREDICTION USING DEEP LEARNING
PDF
IRJET- Number Plate Extraction from Vehicle Front View Image using Image ...
PDF
Traffic Sign Recognition Model
PDF
AUTONOMOUS SELF DRIVING CARS
PDF
IRJET- Self Driving RC Car using Behavioral Cloning
PDF
Predicting Steering Angle for Self Driving Vehicles
PDF
Face Recognition Based on Image Processing in an Advanced Robotic System
PDF
License plate extraction of overspeeding vehicles
PDF
IRJET- Smart Traffic Control System using Image Processing
PDF
Novel-design-Panoramic-camera-by dr MONIKA
PDF
Next generation engine immobiliser
PDF
Next generation engine immobiliser
DOCX
A machine learning model for average fuel consumption in heavy vehicles
Self Driving Car
Self-Driving Car to Drive Autonomously using Image Processing and Deep Learning
IRJET-Survey on Simulation of Self-Driving Cars using Supervised and Reinforc...
AI powered CAR DAMAGE detection project PPT
An Experimental Analysis on Self Driving Car Using CNN
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
Intelligent Parking Space Detection System Based on Image Segmentation
CAR DAMAGE DETECTION AND PRICE PREDICTION USING DEEP LEARNING
IRJET- Number Plate Extraction from Vehicle Front View Image using Image ...
Traffic Sign Recognition Model
AUTONOMOUS SELF DRIVING CARS
IRJET- Self Driving RC Car using Behavioral Cloning
Predicting Steering Angle for Self Driving Vehicles
Face Recognition Based on Image Processing in an Advanced Robotic System
License plate extraction of overspeeding vehicles
IRJET- Smart Traffic Control System using Image Processing
Novel-design-Panoramic-camera-by dr MONIKA
Next generation engine immobiliser
Next generation engine immobiliser
A machine learning model for average fuel consumption in heavy vehicles
Ad

More from IRJET Journal (20)

PDF
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
PDF
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
PDF
Kiona – A Smart Society Automation Project
PDF
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
PDF
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
PDF
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
PDF
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
PDF
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
PDF
BRAIN TUMOUR DETECTION AND CLASSIFICATION
PDF
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
PDF
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
PDF
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
PDF
Breast Cancer Detection using Computer Vision
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
Kiona – A Smart Society Automation Project
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
BRAIN TUMOUR DETECTION AND CLASSIFICATION
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
Breast Cancer Detection using Computer Vision
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Ad

Recently uploaded (20)

PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PPTX
bas. eng. economics group 4 presentation 1.pptx
PPTX
additive manufacturing of ss316l using mig welding
PPTX
Sustainable Sites - Green Building Construction
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PPTX
Internet of Things (IOT) - A guide to understanding
PPT
Project quality management in manufacturing
PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PDF
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
PPTX
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PPTX
Construction Project Organization Group 2.pptx
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PPTX
UNIT 4 Total Quality Management .pptx
PDF
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Embodied AI: Ushering in the Next Era of Intelligent Systems
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
Model Code of Practice - Construction Work - 21102022 .pdf
bas. eng. economics group 4 presentation 1.pptx
additive manufacturing of ss316l using mig welding
Sustainable Sites - Green Building Construction
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
Internet of Things (IOT) - A guide to understanding
Project quality management in manufacturing
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
UNIT-1 - COAL BASED THERMAL POWER PLANTS
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
Construction Project Organization Group 2.pptx
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
UNIT 4 Total Quality Management .pptx
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf

IRJET- Self-Driving Cars: Automation Testing using Udacity Simulator

  • 1. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 08 Issue: 04 | Apr 2021 www.irjet.net p-ISSN: 2395-0072 © 2021, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 1 Self-Driving Cars: Automation Testing Using Udacity Simulator Shahzeb Ali1 AEE Dept. Of Automotive Electronics Eng., Coventry University, Coventry, UK ---------------------------------------------------------------------***--------------------------------------------------------------------- Abstract - Daily new and innovative cars with top features are being launched around the globe. But, the mention of self- driven cars still garners a lot of attention. The work on self- driven cars is going on for quite some time, but they are still not fully prepared to be launched in the market. One of the reasons can be, their inability to understand the common day- to-day etiquettes followed by the society on the road. Hence, this paper proposed a methodology to perform automated testing for Self-Driving Vehicles using Udacity Simulator Key Words: Machine Learning, Self-Driving, Behavioral Cloning, Simulator 1. INTRODUCTION We live in an exciting period where each day more and more technology developmentsareunderprocessandautomation is one of the blessings of it. Automation is a way or method to deliver the output with minimal intervention from humans and in some case, there is no human intervention. These automations provide an accuracy level which is quite difficult to achieve manually and are highly beneficial in fields such as automotive industry, Aerospace, and military where the possibility to have an error in calculation or output is negligible. Machine Learning has also grown widely in the field of Automotive Engineering and many vehiclestodayaredriven based on the algorithms studied through machine learning. 
With the adequate help from machine learning we can develop models that can help the vehicle to takethedecision automatically and perform the task with great precession. One of the tasks which is performed using this learning is developing a model for self-driving car andtestitona totally new track and environment for autonomous testing. Following few interesting technologies provide a clear picture of the methodologies currently available in the market for simulating and testing the Self-Driving vehicles. A deep learning algorithm which user virtual environment for self-driving vehicle [2]. The data is collectedusinganend to end method and Nvidia autonomous driving techniques. After the successful collection of training data, AlexNet Convolutional Neural Network a patternrecognition method is used for the training. Once successfully trained on this virtual environment data the toy car is used forshowing how the model performs in the real-world scenario. Another method followed for implementing and testingself- driving cars is a simulator and algorithm based automated testing, it assesses the performance of the self-driving car using simulator. The simulation system will create different randomly, manually modifiedscenarioswhicharegenerated based on an algorithm. The genetic algorithm will be used for augmenting both manual and random scenes to identify the failures across the system.[4]. Developing a model which maps raw images can also be used. The focus is to develop a model that can map different raw images captured to perform training and testing to a correct steering angle usingthedeeplearningalgorithm.The data is collected using a vehicle platform built using 1/10 scale RC Car, Raspberry pi 3 Model B computer and Front facing camera [5]. Prototype based approach for testing Self-Driving cars includes creating a self-driving car prototype (using simulation) that uses cameras fornavigationandgenerates a better output. 
For prototype purpose a 3D virtual city is designed which depicts a real environment with traffic cars, traffic signals and different type of obstacles. The sole target of our self-driving vehicle should navigate easily without violating any traffic regulations and hitting any unwanted obstacles. It should be quick efficientandcomfortablesothat fuel consumption is reduced, and minimum jerks are felt throughout the journey. [1] In this paper, we are using the Udacity Simulatorfortraining and the testing of the data. Udacity Simulator provides us with two different set of modes i.e. Training Mode and Autonomous Mode. The user drives the vehicle manually using the keyboard keys for generationofdata i.e.images are captured using three differentcamerasattachedonthefront, left and right side of the vehicle and also the steeringangleis captured for different turns and corners.
  • 2. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 08 Issue: 04 | Apr 2021 www.irjet.net p-ISSN: 2395-0072 © 2021, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 2 Figure -1: Udacity Simulator Once the data is collected, it is processed and trained usinga deep neural network and provides a trained model which is later used in the autonomous mode, whereuserusesServer- client mechanism to run the vehicle in autonomous mode. Figure -2: Block Diagram: Self Driving Cars The main outcomes we can achieve from this paper are:  Providing a method to shift the manual testing of vehicles to a safer and accurate automated method of testing.  The image processing and training techniques helps in providing a high accuracy for the vehicle to be tested in an environment which is totally new and different from the track on which it is trained on. 2. DATA COLLECTION: TRAINING MODE To perform the Training on a neural network the data needs to be collected for driving the vehicle on one of the tracks provided in the Udacity simulator. The images from the three different cameras which are connected to the vehicle on the front hood, left mirror and right mirror in the virtual environment are collected continuously as we are recording in the training mode along with the steering angle for the different turns on the road. The images whicharecollectedwhiledrivingarestoredin the desired path which can be set at the beginning of the training process. Along with the images, there is driving log .csv file generated which consist of following different columns: Figure -3: .csv file column Description
  • 3. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 08 Issue: 04 | Apr 2021 www.irjet.net p-ISSN: 2395-0072 © 2021, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 3 Figure -4: Driving Log.csv File The vehicle is driven on the track using the following keys of the Keyboard i.e. a. Upper-Arrow: Controls the straight direction b. Lower-Arrow: Reverse Direction c. Left-Arrow: Left Turn d. Right Arrow: Right Turn The following image depicts the different images which are captured for training of the vehicle in the virtual environment. Figure -5: Image captured from Right Side Camera Figure -6: Image captured from Left Side Camera Figure -7: Image captured from Center Camera 3. IMAGE PROCESSING AND DATA AUGMENTATION Once the data is collected successfully, the next step involves in training of data is the process of performing Image Augmentation and image processing. The images collected from recording of the driving of vehicle are not uniform, it is highly possible thatcenter imagesaremoreorleftimagesare more or vice-versa, there is a high possibility that images contain various pixels which are not useful for good training of data. Hence, it is better to perform some of the image Augmentation and Image processing Techniques. [14] Following are some of the Image Augmentation techniques used:  Image Zooming This method is used for magnifying image such that a better view of important image pixels is clear. In zooming the pixels are inserted into the image to increase the size of the image. It is made possible by pixel replication or interpolation.
Figure -8: Image Zooming

• Image Panning: An augmentation technique that repositions the image horizontally. It helps keep the main object or subject in focus while de-emphasizing less essential background.

Figure -9: Image Panning

• Image Random Flip: Flips images horizontally or vertically depending on the values specified. It is one of the methods used to balance the dataset.

Figure -10: Random Flipping

• Image Brightness: Varies the intensity of images acquired from the training dataset, making them clearer for better training with the CNN.

Figure -11: Image Brightness

The following image processing techniques are used:

• Image Resizing: Varies the height and width of the images so that they are uniform, the main object required for training is clearly visible, and unnecessary area is cropped.

• Image Gaussian Blur: A noise-reduction technique that blurs the images using a Gaussian function:

f(x) = a * e^(-(x - b)^2 / (2c^2))

where a is the height of the curve's peak, b is the position of the center of the peak, c is the standard deviation, e is Euler's number, and x is the input value.

• Image Color Conversion: The images are converted from RGB to YUV; the YUV color space is more efficient and useful for image processing.

Figure -12: Image Color Conversion
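A few of the operations above can be sketched in NumPy. This is an illustrative sketch, not the paper's exact pipeline: the flip negates the steering angle (necessary whenever a horizontal flip is applied to driving data), the brightness factor is arbitrary, and the BT.601 RGB-to-YUV matrix is a common choice that the paper does not specify:

```python
import numpy as np

def random_flip(image, steering, rng):
    """Horizontal flip; the steering angle's sign must flip with the image."""
    if rng.random() < 0.5:
        return np.fliplr(image), -steering
    return image, steering

def adjust_brightness(image, factor):
    """Scale pixel intensities, clipping to the valid 8-bit range."""
    out = image.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

# BT.601 RGB -> YUV conversion matrix (assumed; the paper converts images
# to YUV but does not state the exact matrix used).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(image):
    return image.astype(np.float32) @ RGB2YUV.T

# Demo on a synthetic 66 x 200 RGB frame (the CNN's input size).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(66, 200, 3), dtype=np.uint8)
flipped, angle = random_flip(img, 0.25, rng)
bright = adjust_brightness(img, 1.3)
yuv = rgb_to_yuv(img)
```

In practice each augmentation is applied with some probability per training sample, so every epoch sees a slightly different version of the dataset.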
4. LEARNING FROM TRAINING DATA

Once the data is captured, processed, and augmented, the next step is splitting it into a training set and a validation set using the train_test_split function. The data is split so that each set contains normalized data with a roughly equal mix of images and varieties. The sets contain the following numbers of images:

• Training samples: 2954
• Validation samples: 1456

Figure -13: Graphical representation: training set, validation set

After splitting and processing the data, the most important step is learning from the training data using the Nvidia convolutional neural network (CNN) model. This model accurately maps the raw pixels of the collected images to the steering angles recorded in the driving log. The CNN learns automatically from the images fed into it, and the model is tuned by varying the hyperparameter values and adjusting the weights.

The Nvidia network architecture used consists of 9 layers: 5 convolutional layers and 4 fully connected layers. The size of the input image is 66 x 200 x 3 (height x width x depth).

Figure -14: Network Parameters

The optimizer used in the network is Adam with a fixed learning rate, and the activation function used for this model is ELU (exponential linear unit).

Figure -15: Model Characteristic
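The layer sizes of this architecture can be checked with a short calculation. Assuming the convolution kernels and strides of the original Bojarski et al. (2016) network that the paper follows (three 5x5 layers with stride 2, then two 3x3 layers with stride 1, "valid" padding), the feature-map sizes from a 66 x 200 x 3 input work out as follows:

```python
def conv_out(size, kernel, stride):
    """Output size of a 'valid' (no-padding) convolution along one dimension."""
    return (size - kernel) // stride + 1

# (filters, kernel, stride) per layer in the Bojarski et al. (2016)
# architecture: three 5x5/stride-2 layers, then two 3x3/stride-1 layers.
conv_layers = [(24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1)]

h, w = 66, 200  # input image height x width (depth 3)
shapes = []
for filters, k, s in conv_layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    shapes.append((h, w, filters))

flattened = h * w * shapes[-1][2]  # features fed into the dense layers
print(shapes)     # [(31, 98, 24), (14, 47, 36), (5, 22, 48), (3, 20, 64), (1, 18, 64)]
print(flattened)  # 1152
```

The 5 convolutional layers plus the 4 fully connected layers (100, 50, 10, and 1 units in the original network) give the 9 layers cited above, with the single output unit producing the predicted steering angle.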
Figure -16: CNN Architecture (Mariusz Bojarski, 2016)

5. TESTING IN AUTONOMOUS MODE

Once the model is fully tuned and every hyperparameter is set to a valid value, it is trained for 592 epochs of 400 samples each. The final validation loss is approximately 0.035.

Figure -17: Testing on a new track, Image 1

Figure -18: Testing on a new track, Image 2

Figure -19: Testing on a new track, Image 3

Training produces a model.h5 file. H5 is a file format used to store structured data; in our case it holds the tuned model, which is fed back into the Udacity simulator through a server-client script to drive the vehicle in autonomous mode. The vehicle is driven both on the track it was trained on and on an entirely new testing track, whose turns and corners the vehicle has never seen. It performs successfully on both tracks, with little deviation from the center of the lane and no collisions throughout the journey.

CONCLUSIONS

Self-driving cars are an upcoming technology, well suited to a world where fuel prices are rising and pollution is increasing every day. Because they are mostly hybrid or fully electric vehicles, self-driving cars can help not only with cost cutting but also with reducing pollution. As future work, we will add object-detection features so that the vehicle can operate in a real-world environment with proper control and speed variation based on surrounding objects.
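As an illustration of the server-client autonomous mode described in Section 5: the drive script typically pairs the model's predicted steering angle with a simple throttle policy. The formula below is a common choice in Udacity-simulator drive scripts, not one taken from this paper, and MAX_SPEED is an assumed parameter:

```python
MAX_SPEED = 25.0  # assumed speed limit; the paper does not specify one

def compute_throttle(steering, speed, max_speed=MAX_SPEED):
    """Simple open-loop throttle policy: back off the throttle for sharp
    steering angles and as the current speed approaches the limit.
    A non-positive result means coasting or braking."""
    return 1.0 - steering**2 - (speed / max_speed)**2

# Straight road at low speed -> nearly full throttle.
print(round(compute_throttle(0.0, 5.0), 3))   # 0.96
# Sharp turn near the speed limit -> throttle backs off sharply.
print(round(compute_throttle(0.5, 24.0), 3))
```

In the actual server-client loop, the script would receive telemetry (camera frame, speed) from the simulator, run the frame through the trained model to get `steering`, compute the throttle as above, and send both values back to the simulator each tick.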
REFERENCES

[1] Bhaskar Barua, C. G. (2019). A Self-Driving Car Implementation Using Computer Vision for Detection and Navigation. IEEE.

[2] Juntae Kim, G. Y. (n.d.). Deep Learning Algorithm Using Virtual Environment Data for Self-Driving Car. IEEE.

[3] Naoki. (2017). Introduction to Udacity Self-Driving Car Simulator. Retrieved from https://naokishibuya.medium.com/introduction-to-udacity-self-driving-car-simulator-4d78198d301d.

[4] Straub, J. (2017). Automated Testing of a Self-Driving Vehicle System. IEEE.

[5] Truong-Dong Do, M. T.-V.-H. (2018). Real-Time Self-Driving Car Navigation Using Deep Neural Network. IEEE.

[6] Mariusz Bojarski, D. D. (2016). End to End Learning for Self-Driving Cars. arXiv.org.

[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.

[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, "DeCAF: A deep convolutional activation feature for generic visual recognition," in International Conference on Machine Learning, 2014, pp. 647-655.

[9] L. Fei-Fei, R. Fergus, and P. Perona, "Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories," Computer Vision and Image Understanding, vol. 106, no. 1, pp. 59-70, 2007.

[10] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, "SUN database: Large-scale scene recognition from abbey to zoo," in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, 2010, pp. 3485-3492.

[11] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.

[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.

[13] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, "DeCAF: A deep convolutional activation feature for generic visual recognition," in International Conference on Machine Learning, 2014, pp. 647-655.

[14] Applied Deep Learning: The Complete Self-Driving Car Course. Udemy. https://www.udemy.com/course/applied-deep-learningtm-the-complete-self-driving-car-course/