IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 13, No. 3, September 2024, pp. 2603~2613
ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i3.pp2603-2613
Journal homepage: http://ijai.iaescore.com
Design and implementation of a driving safety assistant system
based on driver behavior
Adil Salbi1, Mohamed Amine Gadi1, Tarik Bouganssa2, Abdelhadi Eloudrhiri Hassani1, Abdelali Lasfar1
1 LASTIMI Laboratory, High School of Technology of Salé, Mohammed V University of Rabat, Salé, Morocco
2 Smart Systems Laboratory, ENSIAS, Mohammed V University of Rabat, Salé, Morocco
Article history: Received Nov 8, 2023; Revised Feb 19, 2024; Accepted Feb 28, 2024

ABSTRACT
Road accidents are currently one of Morocco's biggest problems, and fatigue, drowsiness, and driver behavior are among their primary causes. This research aims to develop an embedded system based on image processing and computer vision that ensures driving safety by monitoring driver behavior and assists drivers in recovering from micro-sleep or fatigue due to long driving hours and various other reasons. The system can detect driver inattention, drowsiness, and fatigue. The suggested method supports drivers when needed, based on the vehicle velocity: once the driver crosses a certain speed limit, the program starts detecting the face and analyzing this data to determine whether the driver is tired, sleepy, or inattentive. Depending on the criticality level, a different alarm is activated; it can sound a voice alert to help the driver wake up and drive more cautiously. The system is based on artificial intelligence algorithms for image processing, built on the OpenCV libraries and the Python language, that capture the movements of the driver's eyes and head from the moment the automobile is started. All algorithms run on a Raspberry Pi 4 board, and numerous series of experiments have demonstrated overall credible performance, with a success accuracy of over 93% in eye aspect ratio (EAR) and mouth aspect ratio (MAR) calculations.
Keywords:
Driver behavior
Driving safety
Drowsiness
Embedded system
Image processing
OpenCV
This is an open access article under the CC BY-SA license.
Corresponding Author:
Mohamed Amine Gadi
LASTIMI laboratory, High School of Technology of Salé, Mohammed V University of Rabat
Avenue Prince Héritier, BP 227 Salé Médina, Salé, Morocco
Email: gadi.muhamedamine@gmail.com
1. INTRODUCTION
Worldwide, road accidents represent a critical issue, with alarming statistics underscoring the
urgent need for effective interventions to enhance road safety. According to recent surveys conducted by the
Vinci autoroutes foundation in Morocco, falling asleep at the wheel is the leading cause of mortality on
highways, tripling to quadrupling the risk of accidents within thirty minutes after the onset of drowsiness
related to fatigue. The severity of the situation is further highlighted by the fact that a vehicle in Morocco is
18.2 times more likely to be involved in a fatal accident than in Sweden and 13.5 times more than in France
[1]. Further studies have shown that 20 to 30% of accidents involving professional drivers are related to
vigilance disorders, with 39% of highway accidents due to drowsiness and 25% caused by a decrease in
alertness due to falling asleep at the wheel. To underscore the gravity of this issue, Figure 1 below compares Folkway and Fastway in terms of the primary causes of fatal accidents,
derived from a survey executed as part of the 2018 Moroccan barometer of responsible driving project. This
comparison elucidates the specific areas requiring urgent attention for intervention.
Figure 1. Primary causes of fatal accidents according to Moroccan drivers
Addressing this dire reality, our study introduces a driver assistance safety system based on driver
behavior, utilizing image processing and computer vision to detect fatigue and drowsiness. This approach
represents a novel and effective solution to monitor driver behavior in real-time, thereby reducing the risk of
fatigue-related accidents. The article details the design, implementation, and evaluation of our system,
providing a comprehensive overview of the methodology employed, the experiments conducted, and the
analysis of the results obtained, with a success accuracy rate exceeding 97%. Through this study, we aim to
make a significant contribution to road safety in Morocco and beyond by offering an innovative technology
capable of preventing accidents caused by driver fatigue.
To achieve this, we have developed a comprehensive system that integrates advanced image
processing algorithms and machine learning techniques to accurately detect early signs of driver fatigue and
drowsiness. By deploying real-time monitoring and alert mechanisms, our system provides timely warnings
to drivers, potentially saving lives and reducing the incidence of road accidents. This research not only
addresses a critical safety issue but also sets a precedent for future studies in the field, paving the way for the
development of more sophisticated driver assistance technologies.
2. RELATED WORK
To mitigate the precarious situation of road accidents caused by drowsiness, fatigue, or even lack of
vigilance, researchers have considered several approaches to detecting these factors. The drowsiness
detection landscape comprises two main categories of techniques. The first category focuses on driver
performance, monitoring the driver's actions for signs of drowsiness [2], [3]. Steering wheel jolts and the
standard deviation of lateral position (SDLP) are employed to detect drowsiness. While steering wheel jolts
look for erratic movements, SDLP keeps an eye on the vehicle's lane position. The second category, known
as driver state monitoring technique, involves direct monitoring of the driver's physical state. This approach
utilizes image-processing methods to analyze factors such as eye closure, yawning, and head nodding.
Additionally, it incorporates physiological signal-based methods that track changes in heart rate or brain
activity as indicators of drowsiness [4]-[7]. By integrating these techniques, researchers can develop more
accurate and reliable drowsiness detection systems for enhanced driver safety.
2.1. Face detection
Detecting faces, especially those with visible eyes, is effortless for humans but poses a challenge for
machines. Indeed, machine learning plays a crucial role in extracting desired elements from complex
environments, taking into account factors such as scale, orientation, and lighting. Moreover, face detection
has diverse applications, including monitoring driver fatigue. In this context, researchers, led by
Kortli et al. [8], categorized facial detection methods into four groups. Firstly, model-based: this approach
compares face and facial part models with candidates using correlation functions. Almabdy and Elrefaei [9]
emphasized the importance of eyes but also noted potential accuracy issues and computational costs.
Secondly, invariant characteristics: focuses on detecting faces despite variations in pose, lighting, and angles.
Thirdly, facial characteristics: this method locates five key facial features to describe a typical face [10], [11].
Lastly, appearance and machine learning: quickly and accurately applies models acquired from a variety of
images.
Regarding object detection techniques, popular methods include the Viola-Jones algorithm
(Haar cascade) and the histogram of oriented gradients (HOG) by Dalal and Triggs [12]. Specifically, face
detection in real time is one area where the Viola-Jones algorithm excels [13]. The process involves four steps: Haar-like feature selection, integral image creation, AdaBoost training, and classifier cascade design. Notably,
due to their ease of use and quick computation time, Viola and Jones employ Haar features, sometimes
referred to as Haar-like features. Additionally, to speed up feature calculations, they make use of an idea
known as integral images. A significant invention by Viola and Jones is the integral image, which makes
feature calculations more efficient by reducing the frequency of summing pixel values.
Figure 2 illustrates the Haar features, explaining how they are calculated and
highlighting their importance. Particularly, Figure 2(a) depicts the Haar-like features employed by the
Viola-Jones algorithm for face detection, showcasing simple rectangular patterns that assess contrasts
between different image regions, such as the eye area versus the cheek. These patterns use black and white
rectangles to represent positive and negative areas, thereby demonstrating the algorithm's capacity to detect
faces by contrasting light and dark regions. Simultaneously, Figure 2(b) presents a facial image as a typical
input for feature detection. This illustration clarifies the algorithm's objective: to identify invariant facial
characteristics despite variations in expression and orientation, thus emphasizing the role of Haar-like
features in achieving accurate face detection. Finally, Figure 2(c) introduces a novel representation, the
integral image, conceptualized by Viola-Jones. This integral image, of the same size as the original, contains
at each point the sum of the pixels located above and to its left, aiming to streamline the feature calculation
process. The idea is to calculate the sum of all pixels in the image just once, meaning the pixel at position
(x, y) in the full image is the sum of the pixel values above and to the left of (x, y), significantly enhancing
computational efficiency [12], [13]. As shown in Figure 2(c), the integral image at the yellow pixel (20) equals the sum of all blue pixels (1+3+12+4), where $\mathrm{img}_{\mathrm{intg}}(x,y)$ is the integral image at position $(x, y)$, representing the sum of all pixels of the original image $\mathrm{img}_{\mathrm{source}}(x',y')$ up to and including $(x, y)$. The general formula is articulated by (1):

$$\mathrm{img}_{\mathrm{intg}}(x,y) = \sum_{x' \le x,\; y' \le y} \mathrm{img}_{\mathrm{source}}(x',y') \tag{1}$$
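As a concrete illustration, the double summation in (1) can be reproduced with cumulative sums; the following is a minimal NumPy sketch, where the 2×2 example values are taken from Figure 2(c) and everything else is illustrative:

```python
import numpy as np

def integral_image(img_source: np.ndarray) -> np.ndarray:
    # Each entry (x, y) holds the sum of all source pixels (x', y')
    # with x' <= x and y' <= y, exactly as in (1).
    return img_source.cumsum(axis=0).cumsum(axis=1)

# The blue pixels of Figure 2(c): their total (20) appears at the
# bottom-right corner of the corresponding integral-image region.
src = np.array([[1, 3],
                [12, 4]])
intg = integral_image(src)
assert intg[-1, -1] == 1 + 3 + 12 + 4  # the "yellow pixel" value, 20
```

Once the integral image is built, the sum over any rectangle requires only four lookups, which is what makes Haar-like feature evaluation fast.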
Sequentially applied, cascade classifiers make a pivotal decision at each stage: if a sub-region may contain the object, it progresses to the next classifier; otherwise, it is rejected immediately. According to
Viola and Jones, the vast majority of sub-regions that test negative (non-face/non-eyes) can be promptly
dismissed using classifiers at the beginning of the cascade, allowing more focus on positive (face/eyes)
inputs. However, the Haar cascade faces several limitations. Firstly, it may struggle to detect faces under
challenging lighting conditions, potentially resulting in high false detection rates or inadequate detection
rates. Additionally, Haar cascade may find it difficult to detect faces in images where faces are partially
obscured, such as those hidden behind glasses or hats. Finally, Haar cascade might face challenges in
detecting faces within images exhibiting large angle changes or complex facial expressions [14].
Figure 2. Illustration of Haar's characteristics for computational analysis of feature extraction and summation in image processing: (a) Haar-like features for edge and line detection, (b) Haar feature application on facial features, and (c) source-to-integral image conversion
On another front, the HOG method emerges as a robust technique in computer vision for object
detection within images. Proposed by researchers Dalal and Triggs in their seminal article "Histograms of Oriented Gradients for Human Detection" [15], the HOG method involves dividing the image into small cells
and calculating the gradient orientation of every pixel within each cell. Subsequently, a histogram of gradient
orientations is constructed for each block of cells, enabling the representation of the entire image by a vector
of HOG descriptors. This representation is pivotal for training classifiers, such as support vector machines
(SVM), to detect objects in new images. Crucially, converting the original image into a gray intensity image
is necessary to streamline the process [12], [15]. The essence of this approach lies in contrasting the current
pixel's darkness with its immediate surroundings, thereby delineating the direction towards increasing
darkness. This methodical process, albeit seemingly redundant, ensures that both dark and light areas are
uniformly represented based on the direction of brightness change.
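To make the descriptor construction concrete, the sketch below computes HOG features with scikit-image; the cell size and input file are illustrative assumptions, and the resulting vector is what would feed an SVM classifier:

```python
from skimage import color, io
from skimage.feature import hog

img = color.rgb2gray(io.imread("driver.jpg"))  # hypothetical grayscale frame

# Gradient-orientation histograms per cell, concatenated into one
# descriptor vector, as described by Dalal and Triggs.
descriptor = hog(img,
                 orientations=9,            # bins of gradient direction
                 pixels_per_cell=(16, 16),  # the small squares discussed below
                 cells_per_block=(2, 2))    # local contrast normalization
```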
Furthermore, by analyzing the image in small 16×16-pixel squares, the flow of light/dark becomes
discernible at a macro level, revealing the image's fundamental pattern. In each square, the count of gradients
pointing in each principal direction is tallied. These squares are then annotated in the image with the
directions of dominant arrows. To locate faces within this HOG image, it suffices to identify parts of the
image that closely match a known HOG model extracted from training faces. Conversely, a face oriented away from the camera may not be captured, leading the system to wrongly conclude that no face, and consequently no driver, is present. To mitigate this limitation, each image is adjusted so that the eyes and lips remain in fixed reference positions.
This adjustment employs an algorithm known as "face landmark estimation," which identifies 68 specific
points, or "landmarks," on each face, such as the top of the chin and the edges of each eye. A neural network
is then trained to detect these 68 specific points on any face, enhancing the system's accuracy and versatility.
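A minimal sketch of this pipeline with the Dlib library is given below; it assumes Dlib's publicly distributed 68-point predictor file is available locally:

```python
import dlib

detector = dlib.get_frontal_face_detector()  # HOG + linear SVM face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("driver.jpg")  # hypothetical frame
for face in detector(img):
    shape = predictor(img, face)
    # 68 (x, y) landmarks; indices 36-47 outline the eyes, 48-67 the mouth.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```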
2.2. Eye status detection
Eye condition is a critical factor in determining whether a driver is tired, asleep, or in their normal
state. In essence, eye detection serves as an essential pillar for analyzing driver behavior. Several factors are
considered in this operation, including external noise and the diversity of eye appearance. The concept of
appearance diversity encompasses shape, size, color variation, and movement of the iris and eyelid, while
external noise factors include glasses, hair, shade, quality of imaging conditions, and environmental
conditions.
At the heart of this method is the extraction of geometric characteristics of the eye, such as the
visible iris and the elliptical shape of the eyelids, which forms the basis for distinguishing whether the eye is
open or closed. Furthermore, variations in intensity distribution also facilitate this determination through the
presence or absence of the iris and the white region of the eye in an image. Irrespective of the environmental
changes, curves representing the differences between opened and closed eyes are generated by the horizontal
and vertical projections applied to the eye region. This technique is crucial for determining the degree of eye
closure and aligning the detected region. These methods are adept at managing the various changes that eyes
can undergo, including the processing of low-quality images, making them more reliable than other methods.
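A minimal sketch of the projections just described, applied to a grayscale eye region, is shown below; open and closed eyes yield visibly different profile curves:

```python
import numpy as np

def eye_projections(eye_region: np.ndarray):
    # Row-wise and column-wise intensity sums of the eye region;
    # the shapes of these curves separate open from closed eyes.
    horizontal = eye_region.sum(axis=1)  # one value per row
    vertical = eye_region.sum(axis=0)    # one value per column
    return horizontal, vertical
```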
The approach consists of two steps: extracting visual characteristics from the photometric aspect of the eyes
and their classification. Subsequently, the next section describes the materials and methods used to design
and implement a cost-effective automotive electronic embedded system. This system aims to reduce
accidents by monitoring drivers in critical situations, such as driving in states of fatigue and drowsiness or
driving with inattention on highways.
3. MATERIAL AND METHODS
3.1. Material setup
In this project, we required equipment capable of processing video sequences in real-time while
being compact enough for easy integration into a vehicle. Several types of electronic boards can perform image processing, but the Raspberry Pi 4 Model B was chosen for its superior hardware performance, including a 64-bit quad-core system-on-chip (SoC) processor at 1.5 GHz, 2 GB of RAM (expandable), and
various features such as Wi-Fi, Bluetooth, USB 3.0 ports, and 40-pin GPIO connectors. Its small size
(85×56×16 mm) and cost-effectiveness make it an ideal solution. The system is powered by a 5 V source,
such as a car's cigarette lighter. Figure 3 shows the functional diagram of the on-board system used for
detecting driving fatigue. This diagram illustrates how the Raspberry-Pi 4-B board, at the heart of the system,
is connected to the various components, including a camera for driver monitoring and an alert module to
report detected fatigue. This configuration demonstrates the integration of image processing and
communications technologies to create a proactive safety feature in the vehicle.
Figure 3. Functional system view
The device captures the driver's face in real time using a camera, as shown in Figure 3. The Raspberry Pi's microprocessor analyzes fatigue or drowsiness through an algorithm written in Python using the OpenCV library. When fatigue is detected, a visual alert is triggered, supplemented by an audible alarm if necessary. Advanced SOS measures, such as activating the hazard lights to inform other road users or sending an emergency SMS with the vehicle's location, are implemented through the addition of
NEO-6M GPS and GSM modules.
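As an illustration of the SOS path, a GSM modem of this kind is usually driven with standard AT commands over the Pi's serial port; the sketch below is only indicative (the port name, baud rate, and phone number are assumptions, and production code would wait for the modem's response between commands):

```python
import serial

gsm = serial.Serial("/dev/ttyS0", 9600, timeout=1)  # assumed UART wiring

def send_sos(position: str) -> None:
    # Standard GSM AT commands: switch to text mode, address the
    # recipient, then terminate the message body with Ctrl+Z.
    gsm.write(b"AT+CMGF=1\r")
    gsm.write(b'AT+CMGS="+212600000000"\r')  # placeholder number
    gsm.write(f"Driver unresponsive at {position}".encode() + b"\x1a")
```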
3.2. Software setup
As previously noted, image processing for face detection can be efficiently executed using the
Python language, given the richness of its libraries in this field. A flow diagram, depicted in Figure 4,
illustrates the overall operations of the system. Specifically, Figure 4(a) presents a flow diagram describing
the system's comprehensive process. This process starts by capturing the driver's face in real time through a
camera connected to the Raspberry Pi board. The captured images are then processed by a Python algorithm
utilizing OpenCV libraries for detecting face and facial expressions indicative of fatigue or drowsiness, such
as closing the eyes or yawning. Furthermore, this diagram elucidates the successive steps of acquiring visual
data, analyzing it to identify signs of fatigue, and the immediate actions taken by the system in response to
these signs, including activating alarms.
Moreover, Figure 4(b) specifically illustrates the alarm triggering mechanism when the system
detects the absence of the driver's face or signs of drowsiness. This flowchart delineates how the system
evaluates risk conditions based on the analysis of facial images and, upon detecting fatigue, how it initiates
emergency measures such as visual and audible alarms to awaken the driver and alert passengers. It also
emphasizes the importance of the system's reactivity to signs of fatigue, showcasing the sequence of actions
triggered to minimize the risk of an accident and highlighting the system's capacity to actively intervene in
the event of potential danger, thereby underlining the preventive nature of the developed system.
Figure 4. System algorithm flowcharts: (a) overall system process and (b) driver behavior monitoring
A variety of Python tools and libraries are utilized to extract facial features, such as eye condition. These libraries offer powerful capabilities for processing, manipulating, and analyzing images, as
well as for creating machine learning applications in computer vision to extract facial features for detecting
sleepiness in drivers. Among these libraries, notable ones include OpenCV, Dlib, Imutils, Scipy.spatial, and
Pygame. OpenCV is renowned for its real-time image and video processing capabilities. It finds application
in numerous fields, including facial recognition, video surveillance, and machine vision [16]. OpenCV boasts
both low and high-level application programming interface (APIs) and facilitates parallel programming.
According to its documentation, the library provides various functionalities, such as image manipulation
(loading, saving, and copying), video acquisition, image processing, image analysis, and shape recognition.
OpenCV is instrumental in capturing real-time video from video cameras, marking the initial data acquisition
step. This stage aims to transform video recordings into images, which then form the foundation for
subsequent analysis stages. Furthermore, OpenCV supports eye and face detection using the Viola-Jones
(Haar cascade) method discussed in the first section. Figures 5(a) to 5(c) demonstrate the outcome of this
process. Specifically, Figure 5(b) details the detection and analysis of the eye region to calculate the eye
aspect ratio (EAR), a crucial metric for eye closure. Similarly, Figure 5(c) focuses on the mouth region to
calculate the mouth aspect ratio (MAR), which is instrumental in detecting yawns, another fatigue indicator.
Meanwhile, Figure 5(a) illustrates the facial feature points employed in drowsiness detection, with a focus on
the eye region.
Figure 5. Detection of key facial features by characteristic points of the eye and mouth regions: (a) detection of facial regions (eyes and mouth), (b) characteristic points of the eye region, and (c) characteristic points of the mouth region
Dlib stands out as a machine learning toolkit that facilitates image processing [17]-[19]. The accuracy
provided by Dlib, which employs the HOG method, surpasses that of OpenCV, which relies on the Haar
cascade technique. In the design of this project, the Dlib library will be employed to detect faces and their
characteristic points, marking the characterization step. This step aims to highlight significant facial
expression features while eliminating redundant data. Dlib excels in estimating the location of 68 (x, y)
coordinates that map facial points on a person's face, with these points stored in an indexed array. The
implementation of the Dlib library specifically aims to identify the characteristic points of the eyes and the
mouth to calculate the Euclidean distance, thereby determining the distance between two points. This
calculation, when compared to a predetermined threshold, will enable the assessment of whether the driver is
asleep, tired, or alert. In Dlib, 6 coordinate points (x, y) represent each eye, and 12 other points represent the
mouth, as illustrated in Figure 5 [20]-[22]. It showcases these points that are identified from a pre-trained
model [23]-[26], thereby underlining the project's emphasis on precision and effectiveness in monitoring
driver alertness.
3.2.1. Eye aspect ratio
The EAR serves as a measure to describe and represent the characteristic points of the eyes. This
measurement, indicating the ratio between the height and the width of the eye, is pivotal in determining the
level of eye openness. The higher this value, the more open the eye is considered. Thus, it significantly aids
in characterizing a base of examples (images) to discover a learning model that can predict the state of a new
instance. The EAR value is calculated using (2):
$$EAR = \frac{A_e + B_e}{2 \cdot C_e} \tag{2}$$
where $A_e$ is the distance between eye points (1) and (5) in Figure 5(b), $B_e$ is the distance between points (2) and (4), and $C_e$ is the distance between points (0) and (3). A specific threshold, termed the ocular
threshold ($T_e$), is essential to assess whether the driver is asleep based on the EAR level. If the EAR value is below this threshold, the eye is considered closed. Typically, the threshold is set at $T_e = 0.25$.
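A minimal sketch of this calculation, assuming the six eye landmarks are supplied in the order of Figure 5(b):

```python
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye) -> float:
    # EAR from (2): two vertical distances over twice the horizontal one.
    A_e = dist.euclidean(eye[1], eye[5])
    B_e = dist.euclidean(eye[2], eye[4])
    C_e = dist.euclidean(eye[0], eye[3])
    return (A_e + B_e) / (2.0 * C_e)

T_e = 0.25  # below this ocular threshold the eye is considered closed
```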
3.2.2. Mouth aspect ratio
Yawning, as an indicator of fatigue due to its role in increasing oxygen intake and reducing carbon
dioxide levels in the blood, is similarly measured by the MAR. This ratio elucidates the height to width ratio
of the mouth, helping to ascertain whether the mouth is open (indicating yawning) or closed [22]. The MAR
value is derived from (3):
$$MAR = \frac{A_m + B_m}{2 \cdot C_m} \tag{3}$$
where $A_m$ is the distance between mouth points (2) and (10) in Figure 5(c), $B_m$ the distance between points (4) and (8), and $C_m$ the distance between points (0) and (6). A mouth threshold ($T_m$) defines whether the mouth is considered open in a yawning state based on the MAR value. If the MAR value exceeds this threshold, the mouth is deemed to be yawning. Generally, this threshold is $T_m = 0.5$.
The Dlib library's official documentation showcases its features, including face extraction and
feature point analysis. In this research, the Scipy.spatial sub-library is utilized for calculating Euclidean
distances, facilitating complex geometric calculations required for measuring the eye-opening ratios EAR
and MAR, and for detecting eye blinks. Moreover, the Imutils sub-library, with its "face_utils" function,
provides utilities for processing facial features, such as detecting and extracting facial landmarks from
images. This technique, in conjunction with the Dlib library, aids in detecting facial landmarks from input
videos. Ultimately, using this module simplifies the extraction of facial features, such as key points (e.g., eyes, nose, and mouth), from a face image. Additionally, the Pygame library, suitable for playing sounds, is employed here to issue an audio alert.
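Put together, these libraries yield a short monitoring loop; the sketch below combines the EAR/MAR helpers above and is indicative only (the landmark model path and alert sound file are assumptions):

```python
import cv2
import dlib
import pygame
from imutils import face_utils

(L_START, L_END) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(R_START, R_END) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
pygame.mixer.init()
pygame.mixer.music.load("alert.wav")  # hypothetical alert sound

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = face_utils.shape_to_np(predictor(gray, face))
        ear = (eye_aspect_ratio(shape[L_START:L_END]) +
               eye_aspect_ratio(shape[R_START:R_END])) / 2.0
        mar = mouth_aspect_ratio(shape[48:60])  # 12 outer-lip points, Fig. 5(c)
        if ear < T_e or mar > T_m:
            if not pygame.mixer.music.get_busy():
                pygame.mixer.music.play()  # audible alert via Pygame
```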
4. RESULTS AND DISCUSSIONS
After installing the Raspberry Pi operating system (Raspbian), OpenCV, and the necessary libraries,
the system based on the Raspberry Pi 4 board and equipped with various sensors (camera, gyroscope, and
GPS) and actuators (LED, speaker, and SOS system), as previously described in section 3, was integrated
into a vehicle for testing. These initial tests aimed to refine facial recognition under various conditions.
Figure 6 comprehensively illustrates the system's onboard implementation and driver behavior detection capabilities.
Figure 6(a) depicts the initial setup of the system in the vehicle, showcasing the Raspberry Pi
with its connected components, prepared for real-world testing. Figure 6(b) captures a moment when the
system successfully distinguishes the driver from passengers, thereby demonstrating its capability to
accurately identify the vehicle's operator. Subsequently, Figures 6(c) and 6(d) display the system assessing
the driver's head orientation (left and right). These observations highlight the system's ability to detect
potential inattention when the driver's gaze deviates from the road, prompting a visual alert. Figure 6(e)
presents the user interface issuing a fatigue alert, based on the EAR calculation, which assesses the frequency
of eye blinking to evaluate the driver’s alertness. If this posture persists for more than 3 seconds and the
vehicle's speed exceeds a certain threshold (e.g., 60 km/h), a visual and an audible alarm are activated to
caution the driver. In Figure 6(f), the system detects drowsiness through the analysis of the MAR, monitoring
yawns or other mouth movements indicative of fatigue. Particularly noteworthy is Figure 6(g), which
underscores the system's effectiveness under nighttime driving conditions by demonstrating its capability to
identify drowsiness even in low-light scenarios, vital for enhancing driving safety during the night. Each of
these instances attests to the system's advanced and responsive nature in various driving scenarios,
underscoring its potential to augment driver safety by alerting to varying degrees of inattention and fatigue, as shown in Figure 7.
Figures 7(a) and 7(b) depict a critical situation where the vehicle is in motion, yet the driver's face is
not detected, suggesting unconsciousness or severe distraction. In such cases, the system triggers an
emergency advanced driver-assistance systems (ADAS) warning, as described previously in section 3. This
function is crucial for initiating immediate corrective measures to avert potential accidents due to driver
incapacitation. Conversely, Figures 7(c) and 7(d) illustrate situations where the vehicle is stationary, and no
face is detected. Here, the absence of detection does not immediately necessitate an emergency alert; instead,
the system diligently attempts to locate the driver's face, ensuring readiness for when the vehicle commences
movement. This proactive approach by the system highlights its sophistication and its pivotal role in
continuously safeguarding driver awareness and safety. Each scenario in Figure 7 emphasizes the system’s
adaptability to various conditions and its essential role in monitoring driver consciousness to improve road
 ISSN: 2252-8938
Int J Artif Intell, Vol. 13, No. 3, September 2024: 2603-2613
2610
safety. It is also crucial to note that the camera is capable of capturing faces within a 140-degree angle,
largely ensuring that the driver's face remains within the field of view under most circumstances, thus
allowing for reliable monitoring and prompt system responses to various driver states. In Table 1, we detail
various scenarios and describe how the final system responds by generating alerts for different driver states.
Figure 6. Implementation and results of different driver behavior detection: (a) driver monitoring setup with Raspberry Pi and camera, (b) normal driving detected, (c) driver looking to the right, (d) driver looking to the left, (e) yawning detected, (f) mouth open, possible drowsiness detected, and (g) night driving detected
Figure 7. Other situations of driving behavior detection: (a) in motion, no face detected, triggering emergency ADAS warning; (b) moving, driver unresponsive, activating emergency ADAS alert; (c) stationary, no face detected, no alarm; and (d) stopped, driver absent, no alarm
Table 1. Alert levels for different drowsiness degrees

EAR test   MAR test   Face direction test   Alert type
True       True       True                  No alert
True       False      True                  Fatigue alert
False      True       True                  Cautionary alert
True       True       False                 Warning alert
True       False      False                 Warning alert
False      True       False                 Warning alert
False      False      True                  Warning alert
False      False      False                 Emergency alert
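Table 1 translates directly into a small decision function; a minimal sketch, where each flag is True when the corresponding test passes:

```python
def alert_level(ear_ok: bool, mar_ok: bool, direction_ok: bool) -> str:
    # Direct transcription of Table 1.
    failures = [ear_ok, mar_ok, direction_ok].count(False)
    if failures == 0:
        return "No alert"
    if failures == 1:
        if not mar_ok:
            return "Fatigue alert"       # yawning only
        if not ear_ok:
            return "Cautionary alert"    # eye closure only
        return "Warning alert"           # gaze off the road only
    if failures == 2:
        return "Warning alert"
    return "Emergency alert"             # all three tests fail
```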
Notably, the system categorizes drowsiness into four distinct levels of alert, each corresponding to a
different degree of drowsiness severity: fatigue alert, cautionary alert, warning alert, and emergency alert.
These categories help in tailoring the system’s response to the specific needs posed by the driver's condition,
ensuring both safety and appropriateness of the alert issued. The alert levels are defined as follows:
 Fatigue alert: this level is triggered when the system detects signs of mild drowsiness in the driver. It
serves as an early indication that the driver may need to take a rest or a break, helping to prevent the
escalation of fatigue to more dangerous levels.
 Cautionary alert: it is activated at a moderate level of drowsiness. It indicates that the risks associated
with continued driving while fatigued are increasing and that the driver should consider stopping.
 Warning alert: it is issued when the system identifies severe drowsiness, this alert level represents a
heightened danger to safety. It serves as a pronounced caution to the driver, urging them to take
immediate action to have a rest before continuing to drive.
 Emergency alert: this is the most critical level of alert; it is generated in cases of extreme drowsiness or
unconsciousness. The system's emergency response could involve activating additional safety measures
or alerts to prompt immediate driver or passenger intervention.
The structured approach to categorizing driver drowsiness and issuing corresponding alerts plays a
pivotal role in enhancing road safety. By precisely assessing the severity of the driver's condition and
responding with a tailored alert level, the system endeavors to reduce the risks associated with drowsy
driving, thereby potentially saving lives and preventing accidents. This sophisticated response mechanism in
Table 1 facilitates a more effective and context-aware strategy for managing driver alertness, ultimately
contributing to the safety of all road users.
For each distinct scenario, the algorithm evaluates the driver's drowsiness based on tests like EAR,
MAR, and gaze direction, subsequently determining the most suitable alert level to issue. Despite its
promising outcomes, our drowsiness detection system encounters certain limitations. Primarily, the accuracy
of detection is significantly influenced by the vehicle's interior lighting conditions. In dimly lit environments,
standard cameras might not accurately capture the driver's eyes, potentially leading to incorrect assessments.
Therefore, the use of a camera equipped with a night vision mode is advised to overcome this challenge.
Additionally, the system exhibits noticeable delays in processing the algorithms, particularly when
integrating other SOS alert functionalities. Given that the Python algorithms operate within the Raspbian
operating system on the Raspberry Pi board, this delay is largely attributed to the extensive sequential
processing demanded by the board's microprocessor. Addressing this issue could involve opting for a board
with superior specifications, such as the Raspberry Pi 5, or transitioning to parallel processing technologies
like field programmable gate array (FPGA). FPGA technology marks a novel research avenue in image
processing due to its capability to execute algorithms in a hardware description format with parallel
processing capabilities [27]. Such advancements are expected to significantly enhance real-time performance
within the ADAS framework, thereby offering a more responsive and efficient system for monitoring driver
alertness and ensuring road safety.
5. CONCLUSION
ADAS systems using artificial intelligence represent a significant advance in the field of road
safety and public health. The results of this study demonstrate the effectiveness of algorithm models
integrated into an embedded system such as the Raspberry Pi board to ensure continuous monitoring of driver
behavior and precise recognition of signs of drowsiness, thus offering promising prospects for the development of early detection systems. These technological advances could help reduce accidents related to
fatigue while driving, thereby improving the safety of drivers and passengers. However, challenges remain,
particularly regarding the adaptability of the models to various driving contexts and the handling of false
positives. Future research should focus on continued algorithm optimization, real-time data integration, and
large-scale validation across diverse populations. In conclusion, the use of artificial intelligence for driver
behavior monitoring has considerable potential to improve road safety. However, it is imperative to continue
research and development efforts in order to guarantee the reliability, precision, and generalization of these
systems in varied conditions, thus contributing to creating a safer road environment and preventing accidents
linked to traffic, fatigue, or inattention while driving.
REFERENCES
[1] H. Khyara, A. Amine, and B. Nassih, “Road traffic accidents in Morocco: exploratory analysis of driver, vehicle, and pedestrian
factors,” SN Computer Science, vol. 4, no. 2, 2023, doi: 10.1007/s42979-022-01501-6.
[2] G. Sikander and S. Anwar, “Driver fatigue detection systems: a review,” IEEE Transactions on Intelligent Transportation
Systems, vol. 20, no. 6, pp. 2339–2352, Jun. 2019, doi: 10.1109/TITS.2018.2868499.
[3] M. A. Abu, I. D. Ishak, H. Basarudin, A. F. Ramli, and M. I. Shapiai, “Fatigue and drowsiness detection system using artificial intelligence technique for car drivers,” Advanced Structured Materials, vol. 167, pp. 421–430, 2022, doi: 10.1007/978-3-030-89988-2_31.
[4] M. Alnaggar, A. I. Siam, M. Handosa, T. Medhat, and M. Z. Rashad, “Video-based real-time monitoring for heart rate and
respiration rate,” Expert Systems with Applications, vol. 225, Sep. 2023, doi: 10.1016/j.eswa.2023.120135.
[5] R. P. Balandong, R. F. Ahmad, M. N. M. Saad, and A. S. Malik, “A review on EEG-based automatic sleepiness detection systems
for driver,” IEEE Access, vol. 6, pp. 22908–22919, 2018, doi: 10.1109/ACCESS.2018.2811723.
[6] L. Jin, Q. Niu, Y. Jiang, H. Xian, Y. Qin, and M. Xu, “Driver sleepiness detection system based on eye movements variables,”
Advances in Mechanical Engineering, vol. 2013, 2013, doi: 10.1155/2013/648431.
[7] N. Alioua, A. Amine, A. Rogozan, A. Bensrhair, and M. Rziza, “Driver head pose estimation using efficient descriptor fusion,”
Eurasip Journal on Image and Video Processing, vol. 2016, no. 1, 2016, doi: 10.1186/s13640-016-0103-z.
[8] Y. Kortli, M. Jridi, A. A. Falou, and M. Atri, “Face recognition systems: A survey,” Sensors, vol. 20, no. 2, Jan. 2020, doi:
10.3390/s20020342.
[9] S. Almabdy and L. Elrefaei, “Deep convolutional neural network-based approaches for face recognition,” Applied Sciences, vol.
9, no. 20, Oct. 2019, doi: 10.3390/app9204397.
[10] S. Saleem, J. Shiney, B. P. Shan, and V. K. Mishra, “Face recognition using facial features,” Materials Today: Proceedings, vol.
80, pp. 3857–3862, 2023, doi: 10.1016/j.matpr.2021.07.402.
[11] M. K. Hasan, M. S. Ahsan, A.-A. -Mamun, S. H. S. Newaz, and G. M. Lee, “Human face detection techniques: A comprehensive
review and future research directions,” Electronics, vol. 10, no. 19, Sep. 2021, doi: 10.3390/electronics10192354.
[12] C. Rahmad, R. A. Asmara, D. R. H. Putra, I. Dharma, H. Darmono, and I. Muhiqqin, “Comparison of Viola-Jones Haar cascade
classifier and histogram of oriented gradients (HOG) for face detection,” in IOP Conference Series: Materials Science and
Engineering, Jan. 2020, doi: 10.1088/1757-899X/732/1/012038.
[13] A. A. Elngar, M. Arafa, A. E. R. A. Naeem, A. R. Essa, and Z. A. Shaaban, “The Viola-Jones face detection algorithm analysis: A
survey,” Journal of Cybersecurity and Information Management, pp. 85–95, 2021, doi: 10.54216/JCIM.060201.
[14] M. Rezaei and R. Klette, “Adaptive Haar-like classifier for eye status detection under non-ideal lighting conditions,” in
Proceedings of the 27th Conference on Image and Vision Computing New Zealand, New York, NY, USA: ACM, Nov. 2012, pp.
521–526. doi: 10.1145/2425836.2425934.
[15] C. I. Patel, D. Labana, S. Pandya, K. Modi, H. Ghayvat, and M. Awais, “Histogram of oriented gradient-based fusion of features
for human action recognition in action video sequences,” Sensors, vol. 20, no. 24, Dec. 2020, doi: 10.3390/s20247299.
[16] M. Khan, S. Chakraborty, R. Astya, and S. Khepra, “Face detection and recognition using OpenCV,” in 2019 International
Conference on Computing, Communication, and Intelligent Systems (ICCCIS), IEEE, Oct. 2019, pp. 116–119. doi:
10.1109/ICCCIS48478.2019.8974493.
[17] D. E. King, “Dlib-ml: a machine learning toolkit,” Journal of Machine Learning Research, vol. 10, pp. 1755–1758, 2009.
[18] I. Rakhmatulin and A. T. Duchowski, “Deep neural networks for low-cost eye tracking,” Procedia Computer Science, vol. 176,
pp. 685–694, 2020, doi: 10.1016/j.procs.2020.09.041.
[19] J. H. Kim, B. G. Kim, P. P. Roy, and D. M. Jeong, “Efficient facial expression recognition algorithm based on hierarchical deep
neural network structure,” IEEE Access, vol. 7, pp. 41273–41285, 2019, doi: 10.1109/ACCESS.2019.2907327.
[20] M. A. N. Reza, E. A. Z. Hamidi, N. Ismail, M. R. Effendi, E. Mulyana, and W. Shalannanda, “Design a landmark facial-based
drowsiness detection using Dlib and OpenCV for four-wheeled vehicle drivers,” in Proceeding of 15th International Conference
on Telecommunication Systems, Services, and Applications, TSSA 2021, 2021, pp. 1–5. doi: 10.1109/TSSA52866.2021.9768278.
[21] A. N. Younis and F. M. Ramo, “Developing Viola Jones’ algorithm for detecting and tracking a human face in video file,” IAES
International Journal of Artificial Intelligence (IJ-AI), vol. 12, no. 4, pp. 1603–1610, Dec. 2023, doi:
10.11591/ijai.v12.i4.pp1603-1610.
[22] S. Mohanty, S. V. Hegde, S. Prasad, and J. Manikandan, “Design of real-time drowsiness detection system using dlib,” in 2019
5th IEEE International WIE Conference on Electrical and Computer Engineering, WIECON-ECE 2019 - Proceedings, IEEE,
Nov. 2019, pp. 1–4. doi: 10.1109/WIECON-ECE48653.2019.9019910.
[23] C. Dewi, R.-C. Chen, X. Jiang, and H. Yu, “Adjusting eye aspect ratio for strong eye blink detection based on facial landmarks,”
PeerJ Computer Science, vol. 8, p. e943, Apr. 2022, doi: 10.7717/peerj-cs.943.
[24] A. B. Shetty, Bhoomika, Deeksha, J. Rebeiro, and Ramyashree, “Facial recognition using Haar cascade and LBP classifiers,”
Global Transitions Proceedings, vol. 2, no. 2, pp. 330–335, Nov. 2021, doi: 10.1016/j.gltp.2021.08.044.
[25] H. Benradi, A. Chater, and A. Lasfar, “A hybrid approach for face recognition using a convolutional neural network combined
with feature extraction techniques,” IAES International Journal of Artificial Intelligence (IJ-AI), vol. 12, no. 2, pp. 627–640, Jun.
2023, doi: 10.11591/ijai.v12.i2.pp627-640.
[26] T. Faisal, I. Negassi, G. Goitom, M. Yassin, A. Bashir, and M. Awawdeh, “Systematic development of real-time driver
drowsiness detection system using deep learning,” IAES International Journal of Artificial Intelligence (IJ-AI), vol. 11, no. 1, pp.
148–160, Mar. 2022, doi: 10.11591/ijai.v11.i1.pp148-160.
[27] I. Bouganssa, M. Sbihi, and M. Zaim, “Laplacian edge detection algorithm for road signal images and FPGA implementation,”
International Journal of Machine Learning and Computing, vol. 9, no. 1, pp. 57–61, 2019, doi: 10.18178/ijmlc.2019.9.1.765.
BIOGRAPHIES OF AUTHORS
Adil Salbi was born in Errachidia, Morocco. He defended his thesis in 2017 in
electrical engineering and embedded systems. He is an assistant professor at the Higher
School of Technology of Salé (ESTS) at Mohammed V University Morocco and a member
of the LASTIMI laboratory (EMI). His research interests include embedded systems and
industrial computing as well as the concept of connected objects (IoT) and artificial
intelligence. He can be contacted at email: adilsalbi00@gmail.com.
Mohamed Amine Gadi has held both Engineer and Master's degrees in computer science and AI since 2021, and he is now a Ph.D. student at the LASTIMI laboratory at Mohammed V University of Rabat, Morocco. His research focuses on artificial intelligence applied to biomedical and automotive systems. He can be contacted at email: gadi.muhamedamine@gmail.com.
Tarik Bouganssa has held a Master's degree in AI from ENSIAS since 2022, and he is now a Ph.D. student at the Smart Systems Laboratory at ENSIAS, Mohammed V University of Rabat, Morocco. He can be contacted at email: tarik.bouganssa@gmail.com.
Abdelhadi Eloudrhiri Hassani has been a Professor at the Higher School of Technology of Salé (ESTS) at Mohammed V University in Rabat since 2023. His research interests include embedded systems, networking and communication protocols for industrial connected objects, and artificial intelligence. He can be contacted at email: eloudrhiri.abdelhadi@gmail.com.
Abdelali Lasfar is a Professor of Higher Education at Mohammed V Agdal
University, Salé Higher School of Technology, Morocco. His research focuses on
compression methods, indexing by image content and image processing at the LASTIMI
Laboratory. He can be contacted at email: abdelali.lasfar@est.um5.ac.ma.

More Related Content

PDF
Real time detecting driver’s drowsiness using computer vision
PDF
Image processing based eye detection methods a theoretical review
PDF
Towards a system for real-time prevention of drowsiness-related accidents
PDF
Effective driver distraction warning system incorporating fast image recognit...
PDF
Real-Time Fatigue Analysis of Driver through Iris Recognition
PDF
DRIVER DROWSINESS ALERT SYSTEM
PPTX
How to publish a proper research paper in data science field
PDF
Automated Framework for Vision based Driver Fatigue Detection by using Multi-...
Real time detecting driver’s drowsiness using computer vision
Image processing based eye detection methods a theoretical review
Towards a system for real-time prevention of drowsiness-related accidents
Effective driver distraction warning system incorporating fast image recognit...
Real-Time Fatigue Analysis of Driver through Iris Recognition
DRIVER DROWSINESS ALERT SYSTEM
How to publish a proper research paper in data science field
Automated Framework for Vision based Driver Fatigue Detection by using Multi-...

Similar to Design and implementation of a driving safety assistant system based on driver behavior (20)

PDF
IRJET- An Effective System to Detect Face Drowsiness Status using Local F...
PDF
Real-Time Driver Drowsiness Detection System
PDF
IRJET- Vehicular Data Acquisition & Driver Behaviours Surveillance System
PDF
Driver Drowsiness and Alert System using Image Processing & IoT
PDF
A Real-Time Monitoring System for the Drivers using PCA and SVM
PDF
IRJET- Driver’s Sleep Detection
PDF
REAL TIME DROWSINESS DETECTION
PDF
A Proposed Accident Preventive Model For Smart Vehicles
PDF
IRJET- Robust Visual Analysis of Eye State
PDF
Driver Dormant Monitoring System to Avert Fatal Accidents Using Image Processing
PDF
Driver Dormant Monitoring System to Avert Fatal Accidents Using Image Processing
PDF
Human Driver’s Drowsiness Detection System
PDF
A Real Time Intelligent Driver Fatigue Alarm System Based On Video Sequences
PDF
REAL TIME DROWSY DRIVER DETECTION USING HAARCASCADE SAMPLES
PDF
Ieeepro techno solutions 2013 ieee embedded project driving safety monitoring
PDF
IRJET- Smart Vehicle Accident Prevention System
PDF
Scaling effectivity in manifold methodologies to detect driver’s fatigueness ...
PDF
IRJET- Implementation on Visual Analysis of Eye State using Image Process...
PDF
ROAD SAFETY BY DETECTING DROWSINESS AND ACCIDENT USING MACHINE LEARNING
PDF
Driver Drowsiness Detection Using Matlab
IRJET- An Effective System to Detect Face Drowsiness Status using Local F...
Real-Time Driver Drowsiness Detection System
IRJET- Vehicular Data Acquisition & Driver Behaviours Surveillance System
Driver Drowsiness and Alert System using Image Processing & IoT
A Real-Time Monitoring System for the Drivers using PCA and SVM
IRJET- Driver’s Sleep Detection
REAL TIME DROWSINESS DETECTION
A Proposed Accident Preventive Model For Smart Vehicles
IRJET- Robust Visual Analysis of Eye State
Driver Dormant Monitoring System to Avert Fatal Accidents Using Image Processing
Driver Dormant Monitoring System to Avert Fatal Accidents Using Image Processing
Human Driver’s Drowsiness Detection System
A Real Time Intelligent Driver Fatigue Alarm System Based On Video Sequences
REAL TIME DROWSY DRIVER DETECTION USING HAARCASCADE SAMPLES
Ieeepro techno solutions 2013 ieee embedded project driving safety monitoring
IRJET- Smart Vehicle Accident Prevention System
Scaling effectivity in manifold methodologies to detect driver’s fatigueness ...
IRJET- Implementation on Visual Analysis of Eye State using Image Process...
ROAD SAFETY BY DETECTING DROWSINESS AND ACCIDENT USING MACHINE LEARNING
Driver Drowsiness Detection Using Matlab
Ad

More from IAESIJAI (20)

PDF
Hybrid model detection and classification of lung cancer
PDF
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
PDF
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
PDF
Event detection in soccer matches through audio classification using transfer...
PDF
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
PDF
Optimizing deep learning models from multi-objective perspective via Bayesian...
PDF
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Exploring DenseNet architectures with particle swarm optimization: efficient ...
PDF
A transfer learning-based deep neural network for tomato plant disease classi...
PDF
U-Net for wheel rim contour detection in robotic deburring
PDF
Deep learning-based classifier for geometric dimensioning and tolerancing sym...
PDF
Enhancing fire detection capabilities: Leveraging you only look once for swif...
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
Depression detection through transformers-based emotion recognition in multiv...
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Enhancing financial cybersecurity via advanced machine learning: analysis, co...
PDF
Crop classification using object-oriented method and Google Earth Engine
Hybrid model detection and classification of lung cancer
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
Event detection in soccer matches through audio classification using transfer...
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
Optimizing deep learning models from multi-objective perspective via Bayesian...
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
A novel scalable deep ensemble learning framework for big data classification...
Exploring DenseNet architectures with particle swarm optimization: efficient ...
A transfer learning-based deep neural network for tomato plant disease classi...
U-Net for wheel rim contour detection in robotic deburring
Deep learning-based classifier for geometric dimensioning and tolerancing sym...
Enhancing fire detection capabilities: Leveraging you only look once for swif...
Accuracy of neural networks in brain wave diagnosis of schizophrenia
Depression detection through transformers-based emotion recognition in multiv...
A comparative analysis of optical character recognition models for extracting...
Enhancing financial cybersecurity via advanced machine learning: analysis, co...
Crop classification using object-oriented method and Google Earth Engine
Ad

Recently uploaded (20)

PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PDF
cuic standard and advanced reporting.pdf
DOCX
The AUB Centre for AI in Media Proposal.docx
PDF
Network Security Unit 5.pdf for BCA BBA.
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Empathic Computing: Creating Shared Understanding
PPTX
Cloud computing and distributed systems.
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
Spectroscopy.pptx food analysis technology
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Encapsulation theory and applications.pdf
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PDF
Review of recent advances in non-invasive hemoglobin estimation
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
cuic standard and advanced reporting.pdf
The AUB Centre for AI in Media Proposal.docx
Network Security Unit 5.pdf for BCA BBA.
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Empathic Computing: Creating Shared Understanding
Cloud computing and distributed systems.
Chapter 3 Spatial Domain Image Processing.pdf
Spectral efficient network and resource selection model in 5G networks
Dropbox Q2 2025 Financial Results & Investor Presentation
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
20250228 LYD VKU AI Blended-Learning.pptx
Understanding_Digital_Forensics_Presentation.pptx
Spectroscopy.pptx food analysis technology
The Rise and Fall of 3GPP – Time for a Sabbatical?
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Encapsulation theory and applications.pdf
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
Review of recent advances in non-invasive hemoglobin estimation

Design and implementation of a driving safety assistant system based on driver behavior

  • 1. IAES International Journal of Artificial Intelligence (IJ-AI) Vol. 13, No. 3, September 2024, pp. 2603~2613 ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i3.pp2603-2613  2603 Journal homepage: http://ijai.iaescore.com Design and implementation of a driving safety assistant system based on driver behavior Adil Salbi1 , Mohamed Amine Gadi1 , Tarik Bouganssa2 , Abdelhadi Eloudrhiri Hassani1 , Abdelali Lasfar1 1 LASTIMI laboratory, High School of Technology of Salé, Mohammed V University of Rabat, Salé, Morocco 2 Smart Systems Laboratory, ENSIAS Mohammed V University of Rabat, Salé, Morocco Article Info ABSTRACT Article history: Received Nov 8, 2023 Revised Feb 19, 2024 Accepted Feb 28, 2024 These days, road accidents are one of Morocco's biggest problems. Fatigue, drowsiness, and driver behavior are among the primary causes. This research aims to develop an embedded system by image processing and computer vision to ensure driving safety by monitoring driver behavior and assist drivers to awaken from micro-sleep or fatigue due to long driving hours and various other reasons. Indeed, the driver inattention, drowsiness or driver fatigue can be detected. The suggested method is designed to support drivers if needed, based on the vehicle velocity. Once the driver crosses a certain speed limit, the program starts face detection and analyzing this data to determine whether the driver is tired, sleepy, or inattentive. This activates different alarm depending on the criticality level. It can sound a voice alert to help him wake up and drive more cautiously. The system is based on artificial intelligence algorithms in image processing based on OpenCV libraries and the Python language to capture the movements of the driver's eyes and head when starting the automobile. Every algorithm is run on a Raspberry-Pi 4 card, and numerous experimentation series have demonstrated overall credible performance with success accuracy of over 93% in eye aspect ratio (EAR) and mouth aspect ratio (MAR) calculations. Keywords: Driver behavior Driving safety Drowsiness Embedded system Image processing OpenCV This is an open access article under the CC BY-SA license. Corresponding Author: Mohamed Amine Gadi LASTIMI laboratory, High School of Technology of Salé, Mohammed V University of Rabat Avenue Prince Héritier, BP 227 Salé Médina, Salé, Morocco Email: gadi.muhamedamine@gmail.com 1. INTRODUCTION In the world, road accidents represent a critical issue, with alarming statistics underscoring the urgent need for effective interventions to enhance road safety. According to recent surveys conducted by the Vinci autoroutes foundation in Morocco, falling asleep at the wheel is the leading cause of mortality on highways, tripling to quadrupling the risk of accidents within thirty minutes after the onset of drowsiness related to fatigue. The severity of the situation is further highlighted by the fact that a vehicle in Morocco is 18.2 times more likely to be involved in a fatal accident than in Sweden and 13.5 times more than in France [1]. Further studies have shown that 20 to 30% of accidents involving professional drivers are related to vigilance disorders, with 39% of highway accidents due to drowsiness and 25% caused by a decrease in alertness due to falling asleep at the wheel. 
To underscore the gravity of this issue, Figure 1 at the bottom illustrates the comparison between Folkway and Fastway in terms of the primary causes of fatal accidents, derived from a survey executed as part of the 2018 Moroccan barometer of responsible driving project. This comparison elucidates the specific areas requiring urgent attention for intervention.
Figure 1. Primary causes of fatal accidents according to Moroccan drivers

Addressing this dire reality, our study introduces a driver assistance safety system based on driver behavior, utilizing image processing and computer vision to detect fatigue and drowsiness. This approach represents a novel and effective solution for monitoring driver behavior in real time, thereby reducing the risk of fatigue-related accidents. The article details the design, implementation, and evaluation of our system, providing a comprehensive overview of the methodology employed, the experiments conducted, and the analysis of the results obtained, with a success accuracy rate exceeding 97%. Through this study, we aim to make a significant contribution to road safety in Morocco and beyond by offering an innovative technology capable of preventing accidents caused by driver fatigue.

To achieve this, we have developed a comprehensive system that integrates advanced image processing algorithms and machine learning techniques to accurately detect early signs of driver fatigue and drowsiness. By deploying real-time monitoring and alert mechanisms, our system provides timely warnings to drivers, potentially saving lives and reducing the incidence of road accidents. This research not only addresses a critical safety issue but also sets a precedent for future studies in the field, paving the way for the development of more sophisticated driver assistance technologies.

2. RELATED WORK
To mitigate the precarious situation of road accidents caused by drowsiness, fatigue, or lack of vigilance, researchers have considered several approaches to detecting these factors. The drowsiness detection landscape comprises two main categories of techniques. The first category focuses on driver performance, monitoring the driver's actions for signs of drowsiness [2], [3]. Steering wheel jolts and the standard deviation of lateral position (SDLP) are employed to detect drowsiness: while steering wheel jolts look for erratic movements, SDLP keeps an eye on the vehicle's lane position. The second category, known as the driver state monitoring technique, involves direct monitoring of the driver's physical state. This approach utilizes image-processing methods to analyze factors such as eye closure, yawning, and head nodding. Additionally, it incorporates physiological signal-based methods that track changes in heart rate or brain activity as indicators of drowsiness [4]-[7]. By integrating these techniques, researchers can develop more accurate and reliable drowsiness detection systems for enhanced driver safety.

2.1. Face detection
Detecting faces, especially those with visible eyes, is effortless for humans but poses a challenge for machines. Indeed, machine learning plays a crucial role in extracting desired elements from complex environments, taking into account factors such as scale, orientation, and lighting. Moreover, face detection has diverse applications, including monitoring driver fatigue. In this context, researchers led by Kortli et al. [8] categorized facial detection methods into four groups. Firstly, model-based: this approach compares face and facial-part models with candidates using correlation functions. Almabdy and Elrefaei [9] emphasized the importance of eyes but also noted potential accuracy issues and computational costs.
Secondly, invariant characteristics: this approach focuses on detecting faces despite variations in pose, lighting, and angles. Thirdly, facial characteristics: this method locates five key facial features to describe a typical face [10], [11]. Lastly, appearance and machine learning: this approach quickly and accurately applies models acquired from a variety of images.

Regarding object detection techniques, popular methods include the Viola-Jones algorithm (Haar cascade) and the histogram of oriented gradients (HOG) by Dalal and Triggs [12]. Real-time face detection is one area where the Viola-Jones algorithm excels [13]. The process involves four steps: classifier cascade design, AdaBoost training, integral image creation, and Haar-like feature selection. Notably, due to their ease of use and quick computation time, Viola and Jones employ Haar features, sometimes referred to as Haar-like features.
Additionally, to speed up feature calculations, they make use of an idea known as integral images. A significant invention by Viola and Jones, the integral image makes feature calculations more efficient by reducing the frequency of summing pixel values.

Figure 2 illustrates the Haar features, explaining how they are calculated and highlighting their importance. Particularly, Figure 2(a) depicts the Haar-like features employed by the Viola-Jones algorithm for face detection, showcasing simple rectangular patterns that assess contrasts between different image regions, such as the eye area versus the cheek. These patterns use black and white rectangles to represent positive and negative areas, thereby demonstrating the algorithm's capacity to detect faces by contrasting light and dark regions. Simultaneously, Figure 2(b) presents a facial image as a typical input for feature detection. This illustration clarifies the algorithm's objective: to identify invariant facial characteristics despite variations in expression and orientation, thus emphasizing the role of Haar-like features in achieving accurate face detection. Figure 2(c) introduces the representation conceptualized by Viola and Jones: the integral image. This integral image, of the same size as the original, contains at each point the sum of the pixels located above and to its left, aiming to streamline the feature calculation process. The idea is to calculate the sum of all pixels in the image just once, meaning the value at position (x, y) in the integral image is the sum of the pixel values above and to the left of (x, y), significantly enhancing computational efficiency [12], [13]. As shown in Figure 2(c), the integral image at the yellow pixel (20) equals the sum of all blue pixels (1+3+12+4), where img_intg(x, y) is the integral image at position (x, y), representing the sum of all pixels in the original image, img_source, up to and including (x, y). The general formula is articulated by (1):

img_intg(x, y) = ∑_{x' ≤ x, y' ≤ y} img_source(x', y')    (1)

Sequentially applied, cascade classifiers make a pivotal decision: if a sub-region contains the object, the operation is deemed complete and progresses to the next classifier; otherwise, it is rejected. According to Viola and Jones, the vast majority of sub-regions that test negative (non-face/non-eyes) can be promptly dismissed using classifiers at the beginning of the cascade, allowing more focus on positive (face/eyes) inputs. However, the Haar cascade faces several limitations. Firstly, it may struggle to detect faces under challenging lighting conditions, potentially resulting in high false-detection rates or inadequate detection rates. Additionally, the Haar cascade may find it difficult to detect faces in images where faces are partially obscured, such as those hidden behind glasses or hats. Finally, the Haar cascade might face challenges in detecting faces within images exhibiting large angle changes or complex facial expressions [14].

Figure 2. Illustration of Haar's characteristics for computational analysis of feature extraction and summation in image processing: (a) Haar-like features for edge and line detection, (b) Haar feature application on facial features, and (c) source to integral image conversion
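As a concrete illustration of (1), the short NumPy sketch below computes an integral image and uses it to sum an arbitrary rectangle in constant time, the operation underlying every Haar-like feature. It is an illustrative sketch rather than the paper's implementation; the example values are the four pixels from Figure 2(c).

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Integral image with a zero row/column prepended, so that
    ii[y, x] holds the sum of img[:y, :x]; the padding keeps the
    rectangle-sum lookups free of negative indices."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, y: int, x: int, h: int, w: int) -> int:
    """Sum of the h-by-w rectangle with top-left corner (y, x),
    obtained from four lookups in constant time."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# The four pixel values from Figure 2(c): their integral image value is 20.
img = np.array([[1, 3],
                [12, 4]])
ii = integral_image(img)
assert rect_sum(ii, 0, 0, 2, 2) == 20
```

A Haar-like feature then reduces to the difference between two or more such rectangle sums, which is why the integral image makes the cascade so fast.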
On another front, the HOG method emerges as a robust technique in computer vision for object detection within images. Proposed by Dalal and Triggs in their seminal article "Histograms of oriented gradients for human detection" [15], the HOG method involves dividing the image into small cells and calculating the gradient orientation of every pixel within each cell. Subsequently, a histogram of gradient orientations is constructed for each block of cells, enabling the representation of the entire image by a vector of HOG descriptors. This representation is pivotal for training classifiers, such as support vector machines (SVM), to detect objects in new images. Crucially, converting the original image into a gray-intensity image is necessary to streamline the process [12], [15]. The essence of this approach lies in contrasting the current pixel's darkness with its immediate surroundings, thereby delineating the direction towards increasing darkness. This methodical process, albeit seemingly redundant, ensures that both dark and light areas are uniformly represented based on the direction of brightness change.
Furthermore, by analyzing the image in small 16×16-pixel squares, the flow of light/dark becomes discernible at a macro level, revealing the image's fundamental pattern. In each square, the count of gradients pointing in each principal direction is tallied, and the squares are then annotated in the image with the directions of the dominant arrows. To locate faces within this HOG image, it suffices to identify parts of the image that closely match a known HOG model extracted from training faces. Conversely, faces oriented in various directions can appear quite different to the detector, which could cause it to misinterpret the scene as containing no face, and consequently no driver. To mitigate this limitation, each image is adjusted to maintain the eyes and lips in direct opposition. This adjustment employs an algorithm known as "face landmark estimation," which identifies 68 specific points, or "landmarks," on each face, such as the top of the chin and the edges of each eye. A neural network is then trained to detect these 68 specific points on any face, enhancing the system's accuracy and versatility.

2.2. Eye status detection
Eye condition is a critical factor in determining whether a driver is tired, asleep, or in their normal state. In essence, eye detection serves as an essential pillar for analyzing driver behavior. Several factors are considered in this operation, including external noise and the diversity of eye appearance. The concept of appearance diversity encompasses shape, size, color variation, and movement of the iris and eyelid, while external noise factors include glasses, hair, shade, quality of imaging conditions, and environmental conditions. At the heart of this method is the extraction of geometric characteristics of the eye, such as the visible iris and the elliptical shape of the eyelids, which forms the basis for distinguishing whether the eye is open or closed. Furthermore, variations in intensity distribution also facilitate this determination through the presence or absence of the iris and the white region of the eye in an image. Irrespective of environmental changes, curves representing the differences between opened and closed eyes are generated by the horizontal and vertical projections applied to the eye region. This technique is crucial for determining the degree of eye closure and aligning the detected region. These methods are adept at managing the various changes that eyes can undergo, including the processing of low-quality images, making them more reliable than other methods. The approach consists of two steps: extracting visual characteristics from the photometric aspect of the eyes, followed by their classification.

The next section describes the materials and methods used to design and implement a cost-effective automotive electronic embedded system. This system aims to reduce accidents by monitoring drivers in critical situations, such as driving in states of fatigue and drowsiness or driving with inattention on highways.

3. MATERIAL AND METHODS
3.1. Material setup
In this project, we required equipment capable of processing video sequences in real time while being compact enough for easy integration into a vehicle.
Several types of electronic boards are capable of image processing, but the Raspberry Pi 4 Model B was chosen for its superior hardware performance, including a 64-bit quad-core system-on-chip (SoC) processor at 1.5 GHz, 2 GB of RAM (expandable), and features such as Wi-Fi, Bluetooth, USB 3.0 ports, and a 40-pin GPIO connector. Its small size (85×56×16 mm) and cost-effectiveness make it an ideal solution. The system is powered by a 5 V source, such as a car's cigarette lighter. Figure 3 shows the functional diagram of the on-board system used for detecting driving fatigue. This diagram illustrates how the Raspberry Pi 4-B board, at the heart of the system, is connected to the various components, including a camera for driver monitoring and an alert module to report detected fatigue. This configuration demonstrates the integration of image processing and communication technologies to create a proactive safety feature in the vehicle.

Figure 3. Functional system view
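For orientation, a minimal acquisition loop of the kind this setup implies might look as follows; the camera index, resolution, and window handling are our own assumptions rather than details taken from the paper.

```python
import cv2

# Minimal acquisition loop, assuming a USB camera on index 0;
# 640x480 keeps the per-frame workload modest on the Raspberry Pi 4.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break                                 # camera unplugged or read error
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ...face and landmark analysis on `gray` goes here...
    cv2.imshow("driver", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```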
The device captures the driver's face in real time using a camera, as shown in Figure 3. The Raspberry Pi's microprocessor analyzes fatigue or drowsiness through an algorithm written in Python using OpenCV library tools. When fatigue is detected, a visual alert is triggered, supplemented by an audible alarm if necessary. Advanced SOS measures, such as activating the hazard lights to inform other road users or sending an emergency SMS with the vehicle's location, are implemented through the addition of NEO-6M GPS and GSM modules.

3.2. Software setup
As previously noted, image processing for face detection can be efficiently executed using the Python language, given the richness of its libraries in this field. A flow diagram, depicted in Figure 4, illustrates the overall operation of the system. Specifically, Figure 4(a) presents a flow diagram describing the system's comprehensive process. This process starts by capturing the driver's face in real time through a camera connected to the Raspberry Pi board. The captured images are then processed by a Python algorithm utilizing OpenCV libraries to detect the face and facial expressions indicative of fatigue or drowsiness, such as closing the eyes or yawning. Furthermore, this diagram elucidates the successive steps of acquiring visual data, analyzing it to identify signs of fatigue, and the immediate actions taken by the system in response to these signs, including activating alarms. Moreover, Figure 4(b) specifically illustrates the alarm-triggering mechanism when the system detects the absence of the driver's face or signs of drowsiness. This flowchart delineates how the system evaluates risk conditions based on the analysis of facial images and, upon detecting fatigue, how it initiates emergency measures such as visual and audible alarms to awaken the driver and alert passengers. It also emphasizes the importance of the system's reactivity to signs of fatigue, showcasing the sequence of actions triggered to minimize the risk of an accident and highlighting the system's capacity to actively intervene in the event of potential danger, thereby underlining the preventive nature of the developed system.

Figure 4. System algorithms flowchart: (a) overall system process and (b) driver behavior monitoring
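The flow of Figure 4(a) can be summarized in code. The sketch below is a simplified skeleton under our own assumptions (a placeholder speed source and stubbed analysis functions), not the authors' exact program.

```python
import cv2
import dlib

def get_speed_kmh():
    return 0.0  # placeholder: would read the NEO-6M GPS (or CAN bus) speed

def classify_state(gray, face, predictor):
    return "no alert"  # placeholder: EAR, MAR, and head-direction tests go here

def raise_alarm(level):
    if level != "no alert":
        print("alarm:", level)  # placeholder: LED, speaker, or SOS actions

def monitor(cap, detector, predictor, speed_limit_kmh=60):
    """Per-frame decision flow of Figure 4: monitoring is active only
    above the speed limit; a moving vehicle with no detected face is
    treated as an emergency, otherwise the frame is classified."""
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        if get_speed_kmh() < speed_limit_kmh:
            continue                       # below the limit: no monitoring
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 0)
        if len(faces) == 0:
            raise_alarm("emergency")       # in motion with no visible face
            continue
        raise_alarm(classify_state(gray, faces[0], predictor))
```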
A variety of Python tools and libraries are utilized in extracting facial features, such as eye condition. These libraries offer powerful capabilities for processing, manipulating, and analyzing images, as well as for creating machine learning applications in computer vision to extract facial features for detecting sleepiness in drivers. Among these libraries, notable ones include OpenCV, Dlib, Imutils, Scipy.spatial, and Pygame.

OpenCV is renowned for its real-time image and video processing capabilities. It finds application in numerous fields, including facial recognition, video surveillance, and machine vision [16]. OpenCV offers both low- and high-level application programming interfaces (APIs) and facilitates parallel programming. According to its documentation, the library provides various functionalities, such as image manipulation (loading, saving, and copying), video acquisition, image processing, image analysis, and shape recognition. OpenCV is instrumental in capturing real-time video from video cameras, marking the initial data acquisition step. This stage aims to transform video recordings into images, which then form the foundation for subsequent analysis stages. Furthermore, OpenCV supports eye and face detection using the Viola-Jones (Haar cascade) method discussed in the first section. Figures 5(a) to 5(c) demonstrate the outcome of this process. Specifically, Figure 5(b) details the detection and analysis of the eye region to calculate the eye aspect ratio (EAR), a crucial metric for eye closure. Similarly, Figure 5(c) focuses on the mouth region to calculate the mouth aspect ratio (MAR), which is instrumental in detecting yawns, another fatigue indicator. Meanwhile, Figure 5(a) illustrates the facial feature points employed in drowsiness detection, with a focus on the eye region.

Figure 5. Detection of key facial features by characteristic points of the eye region and mouth region: (a) detection of facial regions (eyes and mouth), (b) characteristic points of the eye region, and (c) characteristic points of the mouth region

Dlib stands out with its graphics library that facilitates image processing [17]-[19]. The accuracy provided by Dlib, which employs the HOG method, surpasses that of OpenCV, which relies on the Haar cascade technique. In this project, the Dlib library is employed to detect faces and their characteristic points, marking the characterization step. This step aims to highlight significant facial expression features while eliminating redundant data. Dlib excels in estimating the location of the 68 (x, y) coordinates that map facial points on a person's face, with these points stored in an indexed array. The implementation of the Dlib library specifically aims to identify the characteristic points of the eyes and the mouth in order to calculate the Euclidean distance between two points. This calculation, when compared to a predetermined threshold, enables the assessment of whether the driver is asleep, tired, or alert. In Dlib, 6 coordinate points (x, y) represent each eye, and 12 other points represent the mouth, as illustrated in Figure 5 [20]-[22], which showcases these points identified from a pre-trained model [23]-[26], thereby underlining the project's emphasis on precision and effectiveness in monitoring driver alertness.
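A minimal sketch of this landmark-extraction step, using Dlib's standard pre-trained 68-point model together with the Imutils index table, might read as follows; the helper name and the single-face assumption are ours.

```python
import dlib
from imutils import face_utils

detector = dlib.get_frontal_face_detector()          # HOG-based face detector
# Standard pre-trained 68-point model, distributed separately by Dlib:
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

(L_START, L_END) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(R_START, R_END) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(M_START, M_END) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]

def facial_points(gray):
    """Return (left_eye, right_eye, mouth) landmark arrays for the first
    detected face, or None when no face is found."""
    faces = detector(gray, 0)
    if len(faces) == 0:
        return None
    shape = face_utils.shape_to_np(predictor(gray, faces[0]))
    return shape[L_START:L_END], shape[R_START:R_END], shape[M_START:M_END]
```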
3.2.1. Eye aspect ratio
The EAR serves as a measure to describe and represent the characteristic points of the eyes. This measurement, indicating the ratio between the height and the width of the eye, is pivotal in determining the level of eye openness: the higher this value, the more open the eye is considered. It thus significantly aids in characterizing a base of examples (images) from which to derive a learning model that can predict the state of a new instance. The EAR value is calculated using (2):

EAR = (Ae + Be) / (2 × Ce)    (2)

where Ae is the distance between eye points (1) and (5) in Figure 5(b), Be is the distance between points (2) and (4), and Ce is the distance between points (0) and (3).
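A direct transcription of (2) using Scipy's Euclidean distance might read:

```python
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    """EAR from the six eye landmarks, numbered 0-5 as in Figure 5(b):
    the two vertical distances over twice the horizontal one."""
    Ae = distance.euclidean(eye[1], eye[5])
    Be = distance.euclidean(eye[2], eye[4])
    Ce = distance.euclidean(eye[0], eye[3])
    return (Ae + Be) / (2.0 * Ce)
```

In practice, the EAR values of the two eyes are often averaged before comparison with the threshold, which smooths out asymmetric blinks.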
A specific threshold, termed the ocular threshold (Te), is essential to assess whether the driver is asleep based on the EAR level. If the EAR value is below this threshold, the eye is considered closed. Typically, the threshold is set at Te = 0.25.

3.2.2. Mouth aspect ratio
Yawning, an indicator of fatigue due to its role in increasing oxygen intake and reducing carbon dioxide levels in the blood, is similarly measured by the MAR. This ratio expresses the height-to-width ratio of the mouth, helping to ascertain whether the mouth is open (indicating yawning) or closed [22]. The MAR value is derived from (3):

MAR = (Am + Bm) / (2 × Cm)    (3)

where Am is the distance between mouth points (2) and (10) in Figure 5(c), Bm is the distance between points (4) and (8), and Cm is the distance between points (0) and (6). A mouth threshold (Tm) defines whether the mouth is considered open in a yawning state based on the MAR value. If the MAR value exceeds this threshold, the mouth is deemed to be yawning. Generally, this threshold is Tm = 0.5.

The Dlib library's official documentation showcases its features, including face extraction and feature-point analysis. In this research, the Scipy.spatial sub-library is utilized for calculating Euclidean distances, facilitating the geometric calculations required for measuring the EAR and MAR ratios and for detecting eye blinks. Moreover, the Imutils library, with its "face_utils" module, provides utilities for processing facial features, such as detecting and extracting facial landmarks from images. This technique, in conjunction with the Dlib library, aids in detecting facial landmarks from input videos. Ultimately, using this module simplifies the extraction of facial features, such as key points (e.g., eyes, nose, and mouth), from a face image. Additionally, the Pygame library, suitable for playing sounds, is employed here to issue audio alerts.

4. RESULTS AND DISCUSSIONS
After installing the Raspberry Pi operating system (Raspbian), OpenCV, and the necessary libraries, the system based on the Raspberry Pi 4 board and equipped with various sensors (camera, gyroscope, and GPS) and actuators (LED, speaker, and SOS system), as previously described in section 3, was integrated into a vehicle for testing. These initial tests aimed to refine facial recognition under various conditions. Figure 6 comprehensively illustrates the system's onboard implementation and driver behavior detection capabilities. Figure 6(a) depicts the initial setup of the system in the vehicle, showcasing the Raspberry Pi with its connected components, prepared for real-world testing. Figure 6(b) captures a moment when the system successfully distinguishes the driver from passengers, thereby demonstrating its capability to accurately identify the vehicle's operator. Subsequently, Figures 6(c) and 6(d) display the system assessing the driver's head orientation (left and right). These observations highlight the system's ability to detect potential inattention when the driver's gaze deviates from the road, prompting a visual alert. Figure 6(e) presents the user interface issuing a fatigue alert, based on the EAR calculation, which assesses the frequency of eye blinking to evaluate the driver's alertness. If this posture persists for more than 3 seconds and the vehicle's speed exceeds a certain threshold (e.g., 60 km/h), a visual and an audible alarm are activated to caution the driver.
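This timing rule (eyes closed for more than 3 seconds while the vehicle exceeds 60 km/h) can be made concrete with a small sketch. The audio file name and the state handling are our own illustrative choices; the MAR function transcribes (3) in the same way the EAR was transcribed above.

```python
import time
import pygame
from scipy.spatial import distance

T_E, T_M = 0.25, 0.5           # ocular and mouth thresholds from the text
HOLD_S, SPEED_KMH = 3.0, 60.0  # persistence window and speed gate

def mouth_aspect_ratio(mouth):
    # MAR over the twelve outer mouth points, numbered 0-11 as in Figure 5(c)
    Am = distance.euclidean(mouth[2], mouth[10])
    Bm = distance.euclidean(mouth[4], mouth[8])
    Cm = distance.euclidean(mouth[0], mouth[6])
    return (Am + Bm) / (2.0 * Cm)

pygame.mixer.init()
alarm = pygame.mixer.Sound("alarm.wav")   # assumed audio file

eyes_closed_since = None

def check_drowsiness(ear, speed_kmh):
    """Fire the audible alarm only when the eyes stay closed for more
    than HOLD_S seconds while the vehicle exceeds SPEED_KMH."""
    global eyes_closed_since
    if ear < T_E and speed_kmh > SPEED_KMH:
        if eyes_closed_since is None:
            eyes_closed_since = time.time()
        elif time.time() - eyes_closed_since > HOLD_S:
            alarm.play()
    else:
        eyes_closed_since = None          # eyes open again: reset the timer
```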
In Figure 6(f), the system detects drowsiness through analysis of the MAR, monitoring yawns or other mouth movements indicative of fatigue. Particularly noteworthy is Figure 6(g), which underscores the system's effectiveness under nighttime driving conditions by demonstrating its capability to identify drowsiness even in low-light scenarios, vital for enhancing driving safety during the night. Each of these instances attests to the system's advanced and responsive nature in various driving scenarios, underscoring its potential to augment driver safety by alerting to varying degrees of inattention and fatigue, as shown in Figure 7. Figures 7(a) and 7(b) depict a critical situation where the vehicle is in motion, yet the driver's face is not detected, suggesting unconsciousness or severe distraction. In such cases, the system triggers an emergency advanced driver-assistance systems (ADAS) warning, as described previously. This function is crucial for initiating immediate corrective measures to avert potential accidents due to driver incapacitation. Conversely, Figures 7(c) and 7(d) illustrate situations where the vehicle is stationary and no face is detected. Here, the absence of detection does not immediately necessitate an emergency alert; instead, the system diligently attempts to locate the driver's face, ensuring readiness for when the vehicle commences movement. This proactive approach highlights the system's sophistication and its pivotal role in continuously safeguarding driver awareness and safety.
Each scenario in Figure 7 emphasizes the system's adaptability to various conditions and its essential role in monitoring driver consciousness to improve road safety. It is also crucial to note that the camera is capable of capturing faces within a 140-degree angle, ensuring that the driver's face remains within the field of view under most circumstances and thus allowing for reliable monitoring and prompt system responses to various driver states. In Table 1, we detail various scenarios and describe how the final system responds by generating alerts for different driver states.

Figure 6. Implementation and results of different driver behavior detection: (a) driver monitoring setup with Raspberry Pi and camera, (b) normal driving detected, (c) driver looking to the right, (d) driver looking to the left, (e) yawning detected, (f) mouth open, possible drowsiness detected, and (g) night driving detected

Figure 7. Other situations of driving behavior detection: (a) in motion, no face detected, triggering emergency ADAS warning, (b) moving, driver unresponsive, activating emergency ADAS alert, (c) stationary, no face detected, no alarm, and (d) stopped, driver absent, no alarm

Table 1. Various alertness levels for different drowsiness degrees
EAR test   MAR test   Face direction test   Alert type
True       True       True                  No alert
True       False      True                  Tiredness alert
False      True       True                  Cautionary alert
True       True       False                 Warning alert
True       False      False                 Warning alert
False      True       False                 Warning alert
False      False      True                  Warning alert
False      False      False                 Emergency alert

Notably, the system categorizes drowsiness into four distinct levels of alert, each corresponding to a different degree of drowsiness severity: fatigue alert, cautionary alert, warning alert, and emergency alert.
These categories help tailor the system's response to the specific needs posed by the driver's condition, ensuring both the safety and the appropriateness of the alert issued. The alert levels are defined as follows:
- Fatigue alert: triggered when the system detects signs of mild drowsiness in the driver. It serves as an early indication that the driver may need to take a rest or a break, helping to prevent the escalation of fatigue to more dangerous levels.
- Cautionary alert: activated at a moderate level of drowsiness. It indicates that the risks associated with continued driving while fatigued are increasing and that the driver should consider stopping.
- Warning alert: issued when the system identifies severe drowsiness; this alert level represents a heightened danger to safety. It serves as a pronounced caution to the driver, urging them to take immediate action to rest before continuing to drive.
- Emergency alert: the most critical level of alert, generated in cases of extreme drowsiness or unconsciousness. The system's emergency response could involve activating additional safety measures or alerts to prompt immediate driver or passenger intervention.

This structured approach to categorizing driver drowsiness and issuing corresponding alerts plays a pivotal role in enhancing road safety. By precisely assessing the severity of the driver's condition and responding with a tailored alert level, the system endeavors to reduce the risks associated with drowsy driving, thereby potentially saving lives and preventing accidents. The response mechanism in Table 1 facilitates a more effective and context-aware strategy for managing driver alertness, ultimately contributing to the safety of all road users. For each distinct scenario, the algorithm evaluates the driver's drowsiness based on tests such as EAR, MAR, and gaze direction, subsequently determining the most suitable alert level to issue.
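Read as code, Table 1 amounts to a lookup from the three boolean test outcomes to an alert level. The sketch below transcribes it directly, with True meaning that the corresponding test indicates a normal state, per the table.

```python
def alert_level(ear_ok: bool, mar_ok: bool, gaze_ok: bool) -> str:
    """Direct transcription of Table 1: map the three test outcomes
    (True = normal state) onto the system's alert levels."""
    table = {
        (True,  True,  True):  "no alert",
        (True,  False, True):  "tiredness alert",
        (False, True,  True):  "cautionary alert",
        (True,  True,  False): "warning alert",
        (True,  False, False): "warning alert",
        (False, True,  False): "warning alert",
        (False, False, True):  "warning alert",
        (False, False, False): "emergency alert",
    }
    return table[(ear_ok, mar_ok, gaze_ok)]
```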
Despite its promising outcomes, our drowsiness detection system encounters certain limitations. Primarily, the accuracy of detection is significantly influenced by the vehicle's interior lighting conditions. In dimly lit environments, standard cameras might not accurately capture the driver's eyes, potentially leading to incorrect assessments; the use of a camera equipped with a night-vision mode is therefore advised to overcome this challenge. Additionally, the system exhibits noticeable delays in processing the algorithms, particularly when integrating the SOS alert functionalities. Given that the Python algorithms operate within the Raspbian operating system on the Raspberry Pi board, this delay is largely attributed to the extensive sequential processing demanded by the board's microprocessor. Addressing this issue could involve opting for a board with superior specifications, such as the Raspberry Pi 5, or transitioning to parallel processing technologies like field programmable gate arrays (FPGA). FPGA technology marks a novel research avenue in image processing due to its capability to execute algorithms in a hardware description format with parallel processing capabilities [27]. Such advancements are expected to significantly enhance real-time performance within the ADAS framework, thereby offering a more responsive and efficient system for monitoring driver alertness and ensuring road safety.

5. CONCLUSION
ADAS systems using artificial intelligence represent a significant advance in the field of road safety and public health. The results of this study demonstrate the effectiveness of algorithmic models integrated into an embedded system such as the Raspberry Pi board to ensure continuous monitoring of driver behavior and precise recognition of signs of drowsiness, thus offering promising prospects for the development of early detection systems. These technological advances could help reduce accidents related to fatigue while driving, thereby improving the safety of drivers and passengers. However, challenges remain, particularly regarding the adaptability of the models to various driving contexts and the handling of false positives. Future research should focus on continued algorithm optimization, real-time data integration, and large-scale validation across diverse populations. In conclusion, the use of artificial intelligence for driver behavior monitoring has considerable potential to improve road safety. However, it is imperative to continue research and development efforts in order to guarantee the reliability, precision, and generalization of these systems in varied conditions, thus contributing to creating a safer road environment and preventing accidents linked to traffic, fatigue, or inattention while driving.

REFERENCES
[1] H. Khyara, A. Amine, and B. Nassih, "Road traffic accidents in Morocco: exploratory analysis of driver, vehicle, and pedestrian factors," SN Computer Science, vol. 4, no. 2, 2023, doi: 10.1007/s42979-022-01501-6.
[2] G. Sikander and S. Anwar, "Driver fatigue detection systems: a review," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 6, pp. 2339–2352, Jun. 2019, doi: 10.1109/TITS.2018.2868499.
[3] M. A. Abu, I. D. Ishak, H. Basarudin, A. F. Ramli, and M. I. Shapiai, "Fatigue and drowsiness detection system using artificial intelligence technique for car drivers," Advanced Structured Materials, vol. 167, pp. 421–430, 2022, doi: 10.1007/978-3-030-89988-2_31.
[4] M. Alnaggar, A. I. Siam, M. Handosa, T. Medhat, and M. Z. Rashad, "Video-based real-time monitoring for heart rate and respiration rate," Expert Systems with Applications, vol. 225, Sep. 2023, doi: 10.1016/j.eswa.2023.120135.
[5] R. P. Balandong, R. F. Ahmad, M. N. M. Saad, and A. S. Malik, "A review on EEG-based automatic sleepiness detection systems for driver," IEEE Access, vol. 6, pp. 22908–22919, 2018, doi: 10.1109/ACCESS.2018.2811723.
[6] L. Jin, Q. Niu, Y. Jiang, H. Xian, Y. Qin, and M. Xu, "Driver sleepiness detection system based on eye movements variables," Advances in Mechanical Engineering, vol. 2013, 2013, doi: 10.1155/2013/648431.
[7] N. Alioua, A. Amine, A. Rogozan, A. Bensrhair, and M. Rziza, "Driver head pose estimation using efficient descriptor fusion," Eurasip Journal on Image and Video Processing, vol. 2016, no. 1, 2016, doi: 10.1186/s13640-016-0103-z.
[8] Y. Kortli, M. Jridi, A. A. Falou, and M. Atri, "Face recognition systems: a survey," Sensors, vol. 20, no. 2, Jan. 2020, doi: 10.3390/s20020342.
[9] S. Almabdy and L. Elrefaei, "Deep convolutional neural network-based approaches for face recognition," Applied Sciences, vol. 9, no. 20, Oct. 2019, doi: 10.3390/app9204397.
[10] S. Saleem, J. Shiney, B. P. Shan, and V. K. Mishra, "Face recognition using facial features," Materials Today: Proceedings, vol. 80, pp. 3857–3862, 2023, doi: 10.1016/j.matpr.2021.07.402.
[11] M. K. Hasan, M. S. Ahsan, A.-A. Mamun, S. H. S. Newaz, and G. M. Lee, "Human face detection techniques: a comprehensive review and future research directions," Electronics, vol. 10, no. 19, Sep. 2021, doi: 10.3390/electronics10192354.
[12] C. Rahmad, R. A. Asmara, D. R. H. Putra, I. Dharma, H. Darmono, and I. Muhiqqin, "Comparison of Viola-Jones Haar cascade classifier and histogram of oriented gradients (HOG) for face detection," in IOP Conference Series: Materials Science and Engineering, Jan. 2020, doi: 10.1088/1757-899X/732/1/012038.
[13] A. A. Elngar, M. Arafa, A. E. R. A. Naeem, A. R. Essa, and Z. A. Shaaban, "The Viola-Jones face detection algorithm analysis: a survey," Journal of Cybersecurity and Information Management, pp. 85–95, 2021, doi: 10.54216/JCIM.060201.
[14] M. Rezaei and R. Klette, "Adaptive Haar-like classifier for eye status detection under non-ideal lighting conditions," in Proceedings of the 27th Conference on Image and Vision Computing New Zealand, New York, NY, USA: ACM, Nov. 2012, pp. 521–526, doi: 10.1145/2425836.2425934.
[15] C. I. Patel, D. Labana, S. Pandya, K. Modi, H. Ghayvat, and M. Awais, "Histogram of oriented gradient-based fusion of features for human action recognition in action video sequences," Sensors, vol. 20, no. 24, Dec. 2020, doi: 10.3390/s20247299.
[16] M. Khan, S. Chakraborty, R. Astya, and S. Khepra, "Face detection and recognition using OpenCV," in 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), IEEE, Oct. 2019, pp. 116–119, doi: 10.1109/ICCCIS48478.2019.8974493.
[17] D. E. King, "Dlib-ml: a machine learning toolkit," Journal of Machine Learning Research, vol. 10, pp. 1755–1758, 2009.
[18] I. Rakhmatulin and A. T. Duchowski, "Deep neural networks for low-cost eye tracking," Procedia Computer Science, vol. 176, pp. 685–694, 2020, doi: 10.1016/j.procs.2020.09.041.
[19] J. H. Kim, B. G. Kim, P. P. Roy, and D. M. Jeong, "Efficient facial expression recognition algorithm based on hierarchical deep neural network structure," IEEE Access, vol. 7, pp. 41273–41285, 2019, doi: 10.1109/ACCESS.2019.2907327.
[20] M. A. N. Reza, E. A. Z. Hamidi, N. Ismail, M. R. Effendi, E. Mulyana, and W. Shalannanda, "Design a landmark facial-based drowsiness detection using Dlib and OpenCV for four-wheeled vehicle drivers," in Proceedings of the 15th International Conference on Telecommunication Systems, Services, and Applications (TSSA), 2021, pp. 1–5, doi: 10.1109/TSSA52866.2021.9768278.
[21] A. N. Younis and F. M. Ramo, "Developing Viola Jones' algorithm for detecting and tracking a human face in video file," IAES International Journal of Artificial Intelligence (IJ-AI), vol. 12, no. 4, pp. 1603–1610, Dec. 2023, doi: 10.11591/ijai.v12.i4.pp1603-1610.
[22] S. Mohanty, S. V. Hegde, S. Prasad, and J. Manikandan, "Design of real-time drowsiness detection system using Dlib," in 2019 5th IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), IEEE, Nov. 2019, pp. 1–4, doi: 10.1109/WIECON-ECE48653.2019.9019910.
[23] C. Dewi, R.-C. Chen, X. Jiang, and H. Yu, "Adjusting eye aspect ratio for strong eye blink detection based on facial landmarks," PeerJ Computer Science, vol. 8, p. e943, Apr. 2022, doi: 10.7717/peerj-cs.943.
[24] A. B. Shetty, Bhoomika, Deeksha, J. Rebeiro, and Ramyashree, "Facial recognition using Haar cascade and LBP classifiers," Global Transitions Proceedings, vol. 2, no. 2, pp. 330–335, Nov. 2021, doi: 10.1016/j.gltp.2021.08.044.
[25] H. Benradi, A. Chater, and A. Lasfar, "A hybrid approach for face recognition using a convolutional neural network combined with feature extraction techniques," IAES International Journal of Artificial Intelligence (IJ-AI), vol. 12, no. 2, pp. 627–640, Jun. 2023, doi: 10.11591/ijai.v12.i2.pp627-640.
[26] T. Faisal, I. Negassi, G. Goitom, M. Yassin, A. Bashir, and M. Awawdeh, "Systematic development of real-time driver drowsiness detection system using deep learning," IAES International Journal of Artificial Intelligence (IJ-AI), vol. 11, no. 1, pp. 148–160, Mar. 2022, doi: 10.11591/ijai.v11.i1.pp148-160.
[27] I. Bouganssa, M. Sbihi, and M. Zaim, "Laplacian edge detection algorithm for road signal images and FPGA implementation," International Journal of Machine Learning and Computing, vol. 9, no. 1, pp. 57–61, 2019, doi: 10.18178/ijmlc.2019.9.1.765.

BIOGRAPHIES OF AUTHORS

Adil Salbi was born in Errachidia, Morocco. He defended his thesis in 2017 in electrical engineering and embedded systems. He is an assistant professor at the Higher School of Technology of Salé (ESTS) at Mohammed V University, Morocco, and a member of the LASTIMI laboratory (EMI). His research interests include embedded systems and industrial computing, as well as connected objects (IoT) and artificial intelligence. He can be contacted at email: adilsalbi00@gmail.com.
Mohamed Amine Gadi holds Engineer and Master's degrees in computer science and AI, obtained in 2021, and is now a Ph.D. student at the LASTIMI laboratory at Mohammed V University of Rabat, Morocco. His research focuses on artificial intelligence applied to biomedical and automotive systems. He can be contacted at email: gadi.muhamedamine@gmail.com.

Tarik Bouganssa holds a Master's degree in AI from ENSIAS, obtained in 2022, and is now a Ph.D. student in the Smart Systems Laboratory at ENSIAS, Mohammed V University of Rabat, Morocco. He can be contacted at email: tarik.bouganssa@gmail.com.

Abdelhadi Eloudrhiri Hassani has been a Professor at the Higher School of Technology of Salé (ESTS) at Mohammed V University in Rabat since 2023. His research interests include embedded systems, networking and communication protocols in industrial connected objects, and artificial intelligence. He can be contacted at email: eloudrhiri.abdelhadi@gmail.com.

Abdelali Lasfar is a Professor of Higher Education at Mohammed V Agdal University, Salé Higher School of Technology, Morocco. His research at the LASTIMI laboratory focuses on compression methods, indexing by image content, and image processing. He can be contacted at email: abdelali.lasfar@est.um5.ac.ma.