IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 13, No. 1, March 2024, pp. 1104~1111
ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i1.pp1104-1111
Journal homepage: http://ijai.iaescore.com
Embedded artificial intelligence system using deep learning and
raspberrypi for the detection and classification of melanoma
Yousra Dahdouh, Abdelhakim Boudhir Anouar, Mohamed Ben Ahmed
SSET Research Team, C3S Laboratory, FSTT, Abdelmalek Essaâdi University, Tétouan, Morocco
Article Info
Article history:
Received Jul 9, 2023
Revised Oct 15, 2023
Accepted Nov 13, 2023
ABSTRACT
Melanoma is a kind of skin cancer that originates in melanocytes, the cells responsible for producing melanin. It can be a severe and potentially deadly form of cancer because it can metastasize to other regions of the body if not detected and treated early. To facilitate early diagnosis, various low-cost, reliable, and accurate computer-assisted diagnostic systems have recently been proposed based on artificial intelligence (AI) algorithms, particularly deep learning techniques. This work proposes an intelligent system that combines the internet of things (IoT), in the form of a Raspberry Pi connected to a camera, with a deep learning model based on a deep convolutional neural network (CNN) for the real-time detection and classification of melanoma lesions. The key stages of our model before serialization to the Raspberry Pi are: first, a preprocessing step comprising data cleaning, data transformation (normalization), and data augmentation to reduce overfitting during training; second, feature extraction with the deep CNN algorithm; and finally, classification with a sigmoid activation function. The experimental results indicate the efficiency of the proposed classification system: we achieved an accuracy of 92%, a precision of 91%, a sensitivity of 91%, and an area under the receiver operating characteristic curve (AUC-ROC) of 0.9133.
Keywords:
Convolutional neural network
Deep learning
Dermoscopy images
Internet of things
Melanoma
Raspberry Pi
Skin cancer
This is an open access article under the CC BY-SA license.
Corresponding Author:
Yousra Dahdouh
SSET Research Team, C3S Laboratory, FSTT, Abdelmalek Essaâdi University
Tétouan, Morocco
Email: dahdouhyousra@gmail.com
1. INTRODUCTION
Currently, cancer remains a significant global health challenge [1], [2], especially skin cancer [3], which is a leading cause of death in many cases if not detected early and diagnosed correctly. To ensure a successful diagnosis, oncologists must possess comprehensive knowledge of the skin's primary layers: the epidermis, the dermis, and the subcutaneous fat [4]. When abnormal, irregular growth occurs in the uppermost skin layer, it can give rise to mutations that form tumors, which may be either benign or malignant [5]. Melanoma is among the most harmful and deadly kinds of skin cancer and often looks like a mole; it grows rapidly and can spread to all areas of the human body [6]. Diagnosing it in its early stages is vitally important, as early diagnosis helps minimize the risk to patients, improves the prognosis of malignant melanoma, and makes treatment easier [7], [8].
Dermoscopy is widely recognized as a common technique for the detection of skin lesions [9]. Nevertheless, the automatic detection of these tumors from dermoscopy images remains a major public health challenge [10] due to the substantial similarity between melanoma and non-melanoma lesions. Consequently, there is great demand for the research community to develop new and innovative technologies that automatically analyze dermoscopy images and assist oncologists [11], [12]. This is considered the most effective approach for early detection: it promotes and facilitates more efficient skin monitoring through advanced internet of things (IoT) techniques [13] and artificial intelligence algorithms [14] used to build automated systems that aid doctors in early, real-time detection.
Computer vision is a field of artificial intelligence that focuses on enabling computers to interpret and understand visual information. In this area, the detection and classification of objects are crucial tasks, serving as fundamental building blocks for a wide range of applications. Thanks to the rise in computing capabilities, deep learning [15] is revolutionizing computer vision; it comprises a set of algorithms inspired by the structure of the human brain, such as neural networks. Deep learning techniques have emerged as a powerful tool for image segmentation and classification, particularly in the medical domain for the early diagnosis of melanoma [16]–[18]. Convolutional neural networks (CNNs) [19]–[21], for example, are inspired by the human visual cortex and are used especially for image identification and classification.
This research primarily concentrates on the automated detection of melanoma by combining two advanced technologies: the IoT and deep learning. To achieve this objective, we propose an intelligent embedded system built on a Raspberry Pi for the real-time identification and classification of skin lesions captured by an attached Pi camera and processed by a deep convolutional neural network. The model, which contains a large number and varying types of layers, including convolutional, pooling, and dense layers, was trained on a dataset collected from the international skin imaging collaboration (ISIC) archive comprising two distinct classes: benign and malignant. The model is implemented in Python using the Keras and TensorFlow frameworks; it was first created on a computer and then serialized to the Raspberry Pi for real-time execution. The main steps of our proposed deep learning model before serialization to the Raspberry Pi are: 1) passing the input dermoscopy images through a preprocessing step of data cleaning, data transformation (normalization), and data augmentation to reduce overfitting during training; 2) extracting features with the deep CNN algorithm; and 3) classification using a sigmoid activation function. Our experimental results show an accuracy of 92%, a precision of 91%, a sensitivity of 91%, and an area under the receiver operating characteristic curve (AUC-ROC) of 0.9133. We hope our proposed system will help diagnose, detect, and classify melanoma more efficiently than existing methods. The major contributions of this study are: 1) a novel automated system for real-time skin cancer classification that combines an IoT device (a Raspberry Pi attached to a camera) with a precise, optimized artificial intelligence model (a deep CNN); 2) improved model performance obtained by enlarging the set of dermoscopy images with augmentation techniques; and 3) an architecture that provides high accuracy (92%) and strong performance metrics, obtained using a deep network with many layers, varied parameters, and a sigmoid function for classification.
The rest of the article is organized as follows: section 2 presents the proposed method with a brief overview of the dataset, hardware, and software used in this paper. Section 3 contains the experimental results and evaluation with a brief discussion. Finally, section 4 gives the conclusion, followed by the references.
2. METHOD
2.1. Dataset
To implement our CNN model, we need data for training. In this research, a skin cancer dataset [22] consisting of a roughly balanced number of dermoscopic images of benign and malignant skin moles was compiled from the ISIC archive. The dataset is split into 2,637 images for training and 660 images for testing. After collecting the dataset, we resized all images to 128×128 pixels so that they share the same dimensions. The dataset details are given in Table 1.
Table 1. Dataset description
Class                               Benign / malignant
Number of images, benign class      1,800
Number of images, malignant class   1,497
Total number of images              3,297
Dimension after preprocessing       128×128 pixels
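For illustration, the following minimal sketch loads and resizes such a dataset; the data/train and data/test folder layout with benign/ and malignant/ subfolders, and all variable names, are assumptions about how the archive is organized rather than details given in the paper.

```python
import os
import cv2
import numpy as np

IMG_SIZE = 128  # target resolution used throughout the paper

def load_images(root):
    """Read every image under root/{benign,malignant} and resize to 128x128."""
    images, labels = [], []
    for label, cls in enumerate(("benign", "malignant")):
        cls_dir = os.path.join(root, cls)
        for name in sorted(os.listdir(cls_dir)):
            img = cv2.imread(os.path.join(cls_dir, name))
            if img is None:          # skip unreadable files (data cleaning)
                continue
            images.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE)))
            labels.append(label)     # 0 = benign, 1 = malignant
    return np.array(images), np.array(labels)

X, y = load_images("data/train")                     # ~2,637 training images
X_holdout, y_holdout = load_images("data/test")      # ~660 held-out test images
```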
2.2. Hardware system
The hardware used for this research is the Raspberry Pi, a miniature personal computer approximately the size of a credit card, first introduced in 2012 [23], with an SD card serving as the hard disk that stores our model. The camera module is connected to the camera port of the Raspberry Pi to capture an image of the lesion; a power cable supplies power, and a keyboard, an HDMI display, and an LCD display are used to view the result of our model.
2.3. Software design
The software for this research consists first of preparing the operating system: we used Raspbian, which is optimized for the Raspberry Pi hardware and based on Debian Linux. Second, to build a skin cancer detection system on the Raspberry Pi, supporting software is needed: Jupyter Notebook for writing scripts in Python, one of the languages most commonly used for deep learning, together with the open-source TensorFlow framework. We also used the open-source computer vision library OpenCV, which includes a large selection of algorithms that help apply our model, and other libraries such as scikit-learn.
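As a hedged sketch of how this software stack can be tied together on the Raspberry Pi, the snippet below loads a serialized Keras model, grabs one frame from the attached camera with OpenCV, and prints a benign/malignant prediction; the model file name, camera index, and 0.5 decision threshold are illustrative assumptions, not values stated in the paper.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("melanoma_cnn.h5")   # model serialized from the laptop (assumed name)
cam = cv2.VideoCapture(0)               # Pi camera exposed as a video device

ok, frame = cam.read()
if ok:
    # Same preprocessing as during training: resize to 128x128, scale to [0, 1]
    lesion = cv2.resize(frame, (128, 128)).astype("float32") / 255.0
    prob = float(model.predict(lesion[np.newaxis, ...])[0][0])
    label = "malignant" if prob >= 0.5 else "benign"
    print(f"{label} (p={prob:.2f})")    # result shown on the attached display
cam.release()
```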
Regarding the skin cancer detection and classification model, which was trained on a laptop and then serialized to the Raspberry Pi, we applied deep learning [24], a branch of artificial intelligence [25] that uses a multilayer approach in which connected layers extract features from the source data [26]. Every artificial neural network has layers, and the higher the number of layers, the deeper and more powerful the network. Recently, deep learning has been used frequently in various areas of computer vision, including image classification, voice recognition, object detection, and semantic segmentation [27]–[29], among other tasks [30], and it has achieved the best results in detection and classification, particularly for the classification of medical images. For these reasons, we used it in this study, specifically convolutional neural networks, a kind of neural network applied in deep learning mostly to analyze image or visual data [31]. The CNN was first proposed by LeCun et al. [32]. A CNN architecture has multiple layers [33] (see section 2.4 for details of our proposed deep CNN).
2.4. Proposed methodology
The principal goal of this study is to produce an automatic, user-friendly, lightweight system with a simplified setup and low hardware requirements, for reduced cost and energy and easier connection, that can be applied across healthcare, specifically for the detection and classification of skin diseases. The main idea of our work is to implement a basic embedded system that aids doctors in melanoma detection and supports better medical decisions in real time. The proposed system applies an innovative combination of artificial intelligence, specifically a deep learning algorithm, with IoT devices. This combination is a state-of-the-art and attractive solution for building a melanoma skin cancer classification system. The novelty of this research is evident in:
− To the best of our knowledge, this is among the first studies to combine artificial intelligence (AI) and the IoT to implement a complete, intelligent, and automatic melanoma classification system.
− We develop an efficient deep learning model with multiple layers, deployed on an IoT device (the Raspberry Pi), for classifying skin lesions into 'melanoma' and 'benign' classes.
− Since the ISIC archive is one of the most extensive open-source databases available, this paper evaluates the performance of our CNN model on its dermoscopic images.
Our methodology goes through three main steps, each consisting of several tasks: the artificial intelligence part, the Raspberry Pi part, and the deployment of the model on the Raspberry Pi. In this section, we present the key elements of our AI model for detecting and classifying skin cancer from dermoscopy images. We built a model using the CNN algorithm with various layers to extract as many features as possible and tried a collection of activation functions in the classification layer to obtain the best accuracy. As illustrated in Figure 1, our proposed model contains a preprocessing step including image resizing, data cleaning, data transformation (normalization), and data augmentation to increase accuracy. Additionally, we employed the deep CNN algorithm to extract the features of the lesion. After that, three activation functions (softmax, sigmoid, and swish) were tried for classifying an image as benign or melanoma until we found the best result. Finally, we trained our model with optimal hyper-parameters.
Figure 1. The architecture of the proposed method
The proposed methodology is illustrated in Figure 1 and described in detail below:
a. Input image: Our model is trained using a dataset of 3,297 images separated into two classes.
b. Preprocessing: After loading and reading the data, all images were resized to 128×128 pixels to make computation and training faster. The major preprocessing steps are image enhancement and noise removal: we used a median filter to reduce the intensity variation between neighboring pixels [34]. We then normalized the pixel values to the [0,1] range to reduce the range of variation of each feature. Finally, we applied the train_test_split technique to randomly split the data, with 80% used for training and 20% for testing (see the sketch after this list).
c. Data augmentation: This is an AI technique for generating new data from existing data. It is used to train a model properly, increase the classifier's efficiency and accuracy, address class imbalance, reduce overfitting, and improve convergence [35]. In the absence of a large dataset, and since we are working with a finite quantity of data to train our CNN model, we used augmentation to enlarge our data through the Keras ImageDataGenerator function, applying rotation_range=30, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True, vertical_flip=True, and zoom_range=0.1 (see the sketch after this list).
d. Feature extraction and classification: After preprocessing, we used the deep CNN algorithm to extract features and classify the skin cancer. For feature extraction, the first part of the network splits the image's points into several subsets such as areas and points [36]; our proposed system uses the deep CNN to extract these features and classify them, and the network is trained from scratch to learn its optimal weights. The second part of our CNN model performs classification between the two types of skin lesions, with different activation functions applied in the dense layer as classifiers, including softmax, sigmoid, and swish. The from-scratch model provides good performance and accuracy. We used multiple convolution layers to detect increasingly complex features: each convolution layer, when fed with an image, produces many activation maps that emphasize important image features, and the output of each layer is passed as input to the next, where more complex features are extracted. Batch normalization and max-pooling layers were used to prevent initial random weight bias, and filters with different parameters were applied. Finally, these features are combined to make the classification through four fully connected layers, tested with several activation functions: softmax, sigmoid, and swish. The proposed deep CNN model is illustrated in Figure 2, and a hedged code sketch of this architecture is given after the figure.
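The sketch below, referenced in steps (b) and (c), illustrates the median filtering, [0,1] normalization, 80/20 split, and Keras ImageDataGenerator settings described above. The arrays X and y come from the earlier loading sketch, and the 3×3 median kernel and random seed are assumptions.

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Noise removal with a median filter on the uint8 images, then scaling to [0, 1]
X = np.array([cv2.medianBlur(img, 3) for img in X]).astype("float32") / 255.0

# Random 80/20 train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Augmentation settings listed in step (c)
augmenter = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1)

train_flow = augmenter.flow(X_train, y_train, batch_size=32)
```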
Figure 2. Proposed deep CNN model architecture
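The following is a hedged reconstruction of the architecture sketched in Figure 2. The text only specifies the overall pattern (stacked convolution, batch normalization, and max-pooling blocks followed by four fully connected layers and a sigmoid output), so the number of blocks, filter counts, kernel sizes, and dense-layer widths below are illustrative, not the authors' exact configuration.

```python
from tensorflow.keras import layers, models

def build_deep_cnn(input_shape=(128, 128, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Feature-extraction blocks: convolution + batch normalization + max pooling
    for filters in (32, 64, 128, 256):
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    # Four fully connected layers; sigmoid was retained after also trying softmax and swish
    for units in (256, 128, 64):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1, activation="sigmoid"))  # benign vs. malignant probability
    return model
```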
3. RESULTS AND DISCUSSION
3.1. Experiments and results
Our architecture's performance was evaluated and validated using several metrics: accuracy, recall/sensitivity, precision, and F1 score [37]. The performance of this binary classification problem is also assessed with the ROC curve and the AUC [38]. The receiver operating characteristic (ROC) curve is a graphical representation that plots the sensitivity (true positive rate) against 1 − specificity (false positive rate) at various classification thresholds, and the AUC measures the performance of a binary classification model, usually in the context of a ROC curve.
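These metrics can be computed with scikit-learn as in the sketch below; the trained model, X_test, and y_test are assumed from the earlier sketches, and 0.5 is the assumed decision threshold on the sigmoid output.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

probs = model.predict(X_test).ravel()      # sigmoid probabilities of "malignant"
preds = (probs >= 0.5).astype(int)         # hard labels at the 0.5 threshold

print("Accuracy :", accuracy_score(y_test, preds))
print("Precision:", precision_score(y_test, preds))
print("Recall   :", recall_score(y_test, preds))
print("F1 score :", f1_score(y_test, preds))
print("AUC-ROC  :", roc_auc_score(y_test, probs))
print("Confusion matrix:\n", confusion_matrix(y_test, preds))
```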
We trained our model on the Google Colab platform with a graphics processing unit (GPU). The model was implemented with the TensorFlow framework and the open-source Keras packages. For training, we used the train_test_split technique to randomly split the data, and we compiled our model many times with several hyperparameters: Adam, RMSprop, stochastic gradient descent (SGD), and Nadam as optimizers; learning rates of 0.001, 0.0001, and 0.00001; batch sizes of 32, 64, and 128; and 50, 100, and 150 epochs, with binary_crossentropy as the loss function, until we found a good result. Figure 3(a) illustrates the training and test accuracy of our model, while Figure 3(b) shows the training and test loss. After training, we observed a slight overfitting pattern, which can be attributed to the characteristics of the dataset used. Table 2 shows the results obtained with the proposed model.
Figure 3. Test results of the proposed method: (a) accuracy and (b) loss graphs
Table 2. Results of the proposed model
Algorithm   Optimizer   LR        F1 score   Precision   Accuracy
Deep CNN    Adam        0.01      0.8944     0.8934      0.8958
Deep CNN    RMSProp     0.0001    0.8808     0.8802      0.8826
Deep CNN    SGD         0.00001   0.9076     0.9073      0.91
Deep CNN    Adam        0.00001   0.9129     0.9125      0.92
After several rounds of tuning, we achieved an accuracy of 92% with the following hyperparameters: optimizer Adam, learning rate 0.00001, dropout 0.5, batch size 32, and 100 epochs. The aim is to decide on lesion malignancy using the embedded system; for this, the classification model obtained was serialized and copied to the Raspberry Pi. These results show the power of using a deep learning model, especially a deep CNN integrated into an IoT system. Figure 4 presents the confusion matrix of our model, which has size 2×2. The classification accuracy is 92% and the AUC value obtained is 0.9133 (shown in Figure 5). Compared with the models found in other research, as shown in Table 3 (details in section 3.2), our proposed method achieves the best accuracy, owing to the careful choice of the number of layers, convolutional filters, and pooling layers to extract sufficient features, and to the careful choice of the activation function used as the classifier, which yields a highly accurate skin lesion classifier.
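A minimal sketch of the retained configuration and of the serialization step is given below; build_deep_cnn and train_flow refer to the earlier illustrative sketches, and the saved file name is an assumption.

```python
from tensorflow.keras.optimizers import Adam

model = build_deep_cnn()
model.compile(optimizer=Adam(learning_rate=1e-5),   # learning rate 0.00001
              loss="binary_crossentropy",
              metrics=["accuracy"])

# train_flow was built with batch_size=32; dropout 0.5 is set inside the model
history = model.fit(train_flow,
                    validation_data=(X_test, y_test),
                    epochs=100)

model.save("melanoma_cnn.h5")   # copied to the Raspberry Pi and loaded with load_model
```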
Figure 4. Confusion matrix
Figure 5. The ROC curve of our model
Table 3. Comparison with existing work
Reference   Model                   Accuracy (%)
[39]        CNN with 9 layers       80.52
[40]        Modified GoogleNet      81
[41]        CNN+SVM                 89.52
[42]        CNN                     87.82
[43]        MobileNetV2-LSTM        90.72
[44]        DenseNet-EfficientNet   85.80
Proposed    Deep CNN                92
3.2. Discussion
Several works have been published on melanoma diagnosis using different techniques. In this article, advanced technologies based on the IoT and deep learning are proposed. The proposed deep learning architecture is based on a deep CNN implemented from scratch with a large number and varying types of layers (see section 2.4 for details). After training, this model is serialized to the Raspberry Pi to facilitate skin cancer diagnosis in real time, and we achieved an accuracy of 92%. We compared our results with those obtained by other authors using other techniques. Jianu et al. [39] proposed a system for melanoma classification using a CNN with an architecture containing nine layers, built around two main steps, preprocessing and the CNN architecture; this model achieves an accuracy of 80.52%. Kassem et al. [40], for skin lesion classification, presented a GoogleNet model with some modifications at the level of filters and layers (adding layers and replacing layers with others); their model achieves 81% accuracy, compared with 63% for the original GoogleNet. Haghighi et al. [41] proposed a melanoma recognition system based on the fusion of a CNN and a support vector machine (SVM), where the CNN architecture extracts features and the SVM is used as the classifier. In that study, the authors first applied data augmentation and then extracted features with the CNN architecture; after that, a fully connected layer, a dropout layer, and a rectified linear unit (ReLU) layer are added, and finally the classification stage uses the SVM classifier, yielding an accuracy of 89.52%. Guarnizo et al. [42] proposed a model for skin cancer diagnosis using a CNN with many hidden layers; the authors applied the ReLU activation to every convolutional layer and the sigmoid activation for classification, obtaining an average accuracy of 87.82%. Srinivasu et al. [43] proposed an efficient model that can run on lightweight computational devices, based on MobileNetV2 and long short-term memory (LSTM), to classify skin disease, obtaining an accuracy of more than 85%. Huang et al. [44] presented a deep learning method for skin cancer classification based on DenseNet and EfficientNet, obtaining an accuracy of 85.8%.
4. CONCLUSION
Skin diseases have become increasingly prevalent in many regions. This paper aimed to create an intelligent system based on the combination of the internet of things and deep learning algorithms for the real-time diagnosis of melanoma lesions, in which data augmentation was applied to increase the number of images, reduce overfitting, and improve convergence. We obtained our best result with an accuracy of 92%, and the system's performance can be further refined by expanding the dataset and exploring advanced preprocessing techniques. Comparing this study with previous empirical studies (see the discussion section), our findings suggest that our model is one of the most effective solutions for fast decision-making in the healthcare sector, particularly in medical image applications such as the diagnosis of skin cancer. Furthermore, there is room for further enhancement by incorporating more specific and meticulously curated datasets and by developing new layers in deep learning algorithms.
REFERENCES
[1] R. L. Siegel, K. D. Miller, H. E. Fuchs, and A. Jemal, “Cancer statistics, 2022,” CA: A Cancer Journal for Clinicians, vol. 72, no.
1, pp. 7–33, Jan. 2022, doi: 10.3322/caac.21708.
[2] W. Li, A. N. Joseph Raj, T. Tjahjadi, and Z. Zhuang, “Digital hair removal by deep learning for skin lesion segmentation,”
Pattern Recognition, vol. 117, p. 107994, Sep. 2021, doi: 10.1016/j.patcog.2021.107994.
[3] H. Sung et al., “Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in
185 Countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, Feb. 2021, doi: 10.3322/caac.21660.
[4] R. Gordon, “Skin Cancer: An Overview of Epidemiology and Risk Factors,” Seminars in Oncology Nursing, vol. 29, no. 3, pp.
160–169, Aug. 2013, doi: 10.1016/j.soncn.2013.06.002.
[5] G. N. K. Babu and V. J. Peter, “Skin cancer detection using support vector machine with histogram of oriented gradients
features,” ICTACT Journal on Soft Computing, vol. 11, no. 2, pp. 2301–2305, 2021, doi: 10.21917/ijsc.2021.0329.
[6] M. Q. Khan et al., “Classification of Melanoma and Nevus in digital images for diagnosis of skin cancer,” IEEE Access, vol. 7,
pp. 90132–90144, 2019, doi: 10.1109/access.2019.2926837.
[7] M. Perez, J. A. Abisaad, K. D. Rojas, M. A. Marchetti, and N. Jaimes, “Skin cancer: Primary, secondary, and tertiary prevention.
Part I,” Journal of the American Academy of Dermatology, vol. 87, no. 2, pp. 255–268, Aug. 2022, doi:
10.1016/j.jaad.2021.12.066.
[8] K. D. Rojas, M. E. Perez, M. A. Marchetti, A. J. Nichols, F. J. Penedo, and N. Jaimes, “Skin cancer: Primary, secondary, and
tertiary prevention. Part II.,” Journal of the American Academy of Dermatology, vol. 87, no. 2, pp. 271–288, Aug. 2022, doi:
10.1016/j.jaad.2022.01.053.
[9] L. Thomas and S. Puig, “Dermoscopy, digital dermoscopy and other diagnostic tools in the early detection of Melanoma and
follow-up of high-risk skin cancer patients,” Acta Dermato Venereologica, vol. 97, 2017, doi: 10.2340/00015555-2719.
[10] A. Esteva et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp.
115–118, Jan. 2017, doi: 10.1038/nature21056.
[11] A. T. Young et al., “The role of technology in melanoma screening and diagnosis,” Pigment Cell & Melanoma Research,
vol. 34, no. 2, pp. 288–300, Aug. 2020, doi: 10.1111/pcmr.12907.
[12] W. Barhoumi and A. Khelifa, “Skin lesion image retrieval using transfer learning-based approach for query-driven distance
recommendation,” Computers in Biology and Medicine, vol. 137, p. 104825, Oct. 2021, doi: 10.1016/j.compbiomed.2021.104825.
[13] B. R. Ray, M. U. Chowdhury, and J. H. Abawajy, “Secure object tracking protocol for the internet of things,” IEEE Internet of
Things Journal, vol. 3, no. 4, pp. 544–553, Aug. 2016, doi: 10.1109/jiot.2016.2572729.
[14] Jatin Borana, “Applications of artificial intelligence & associated technologies,” in Proceeding of International Conference on
Emerging Technologies in Engineering, Biomedical, Management and Science , 2016, pp. 64–67.
[15] M. Sohail et al., “Racial identity-aware facial expression recognition using deep convolutional neural networks,” Applied
Sciences, vol. 12, no. 1, p. 88, Dec. 2021, doi: 10.3390/app12010088.
[16] F. Olayah, E. M. Senan, I. A. Ahmed, and B. Awaji, “AI techniques of dermoscopy image analysis for the early detection of skin
lesions based on combined CNN Features,” Diagnostics, vol. 13, no. 7, p. 1314, Apr. 2023, doi: 10.3390/diagnostics13071314.
[17] W. Salma and A. S. Eltrass, “Automated deep learning approach for classification of malignant melanoma and benign skin
lesions,” Multimedia Tools and Applications, vol. 81, no. 22, pp. 32643–32660, Apr. 2022, doi: 10.1007/s11042-022-13081-x.
[18] S. Tiwari, “Dermatoscopy using multi-layer perceptron, convolution neural network, and capsule network to differentiate
Malignant Melanoma From Benign Nevus,” International Journal of Healthcare Information Systems and Informatics, vol. 16,
no. 3, pp. 58–73, Jul. 2021, doi: 10.4018/ijhisi.20210701.oa4.
[19] L. Alzubaidi et al., “Review of deep learning: concepts, CNN architectures, challenges, applications, future directions,” Journal of
Big Data, vol. 8, pp. 1–74, Mar. 2021, doi: 10.1186/s40537-021-00444-8.
[20] H.-C. Shin et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics
and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, May 2016, doi:
10.1109/tmi.2016.2528162.
[21] A. K. Sharma et al., “Dermatologist-level classification of skin cancer using cascaded ensembling of convolutional neural
network and handcrafted features based deep neural network,” IEEE Access, vol. 10, pp. 17920–17932, 2022, doi:
10.1109/access.2022.3149824.
[22] C. Fanconi, “Skin Cancer: Malignant vs. Benign.”. Accessed: Oct. 25, 2023. [Online]. Available:
https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign
[23] M. Maksimović, V. Vujović, N. Davidović, V. Milošević, and B. Perišić, “Raspberry Pi as Internet of Things hardware :
Performances and Constraints,” in Design Issues, 2014, vol. 3, no. JUNE, pp. 1–6.
[24] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi:
10.1038/nature14539.
[25] N. T. Duc, Y.-M. Lee, J. H. Park, and B. Lee, “An ensemble deep learning for automatic prediction of papillary thyroid carcinoma
using fine needle aspiration cytology,” Expert Systems with Applications, vol. 188, Feb. 2022, doi: 10.1016/j.eswa.2021.115927.
[26] Z. Alyafeai and L. Ghouti, “A fully-automated deep learning pipeline for cervical cancer classification,” Expert Systems with
Applications, vol. 141, Mar. 2020, doi: 10.1016/j.eswa.2019.112951.
[27] H. Rashid, M. A. Tanveer, and H. Aqeel Khan, “Skin lesion classification using GAN based data augmentation,” in 2019 41st
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jul. 2019, pp. 916–919, doi:
10.1109/embc.2019.8857905.
[28] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in Advances in Neural
Information Processing Systems, 2016, pp. 379–387.
[29] S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, and J. Gao, “Deep learning-based text classification: A
comprehensive review,” ACM Computing Surveys, vol. 54, no. 3, pp. 1–40, Apr. 2021, doi: 10.1145/3439726.
[30] P. Li, D. Wang, L. Wang, and H. Lu, “Deep visual tracking: Review and experimental comparison,” Pattern Recognition, vol. 76,
pp. 323–338, Apr. 2018, doi: 10.1016/j.patcog.2017.11.007.
[31] W. Fang, P. E. D. Love, H. Luo, and L. Ding, “Computer vision for behaviour-based safety in construction: A review and future
directions,” Advanced Engineering Informatics, vol. 43, Jan. 2020, doi: 10.1016/j.aei.2019.100980.
[32] Y. LeCun et al., “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551,
Dec. 1989, doi: 10.1162/neco.1989.1.4.541.
[33] Sai Balaji, “Binary Image classifier CNN using TensorFlow,” Techiepedia. 2020. Accessed: Oct. 25, 2023. [Online]. Available:
https://medium.com/techiepedia/binary-image-classifier-cnn-using-tensorflow-a3f5d6746697
[34] M. Z. Islam, M. S. Hossain, R. ul Islam, and K. Andersson, “Static hand gesture recognition using convolutional neural network
with data augmentation,” in 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019
3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), May 2019, pp. 324–329, doi:
10.1109/iciev.2019.8858563.
[35] C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” Journal of Big Data, vol. 6, no.
60, pp. 1–48, Jul. 2019, doi: 10.1186/s40537-019-0197-0.
[36] K. Swaraja, “Protection of medical image watermarking,” Journal of Advanced Research in Dynamical and Control Systems, vol.
9, pp. 480–486, 2017.
[37] Y. Liu, Y. Zhou, S. Wen, and C. Tang, “A strategy on selecting performance metrics for classifier evaluation,” International
Journal of Mobile Computing and Multimedia Communications, vol. 6, no. 4, pp. 20–35, Oct. 2014, doi:
10.4018/ijmcmc.2014100102.
[38] J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve.,”
Radiology, vol. 143, no. 1, pp. 29–36, Apr. 1982, doi: 10.1148/radiology.143.1.7063747.
[39] S. R. Stefan Jianu, L. Ichim, and D. Popescu, “Automatic diagnosis of skin cancer using neural networks,” in 2019 11th
International Symposium on Advanced Topics in Electrical Engineering (ATEE), Mar. 2019, pp. 1–4, doi:
10.1109/atee.2019.8724938.
[40] M. A. Kassem, K. M. Hosny, and M. M. Fouad, “Skin lesions classification into eight classes for ISIC 2019 Using Deep
Convolutional Neural Network and Transfer Learning,” IEEE Access, vol. 8, pp. 114822–114832, 2020, doi:
10.1109/access.2020.3003890.
[41] S. N. Haghighi, H. Danyali, M. S. Helfroush, and M. H. Karami, “A deep convolutional neural network for melanoma recognition
in dermoscopy images,” in 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE), Oct. 2020,
pp. 453–456, doi: 10.1109/iccke50421.2020.9303684.
[42] J. G. Guarnizo, S. R. Borda, E. C. C. Poveda, and A. M. Rojas, “Automated Malignant Melanoma classification using
convolutional neural networks,” Ciencia e Ingeniería Neogranadina, vol. 32, no. 2, pp. 171–185, Dec. 2022, doi:
10.18359/rcin.6270.
[43] P. N. Srinivasu, J. G. SivaSai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, “Classification of skin disease using deep learning
neural networks with MobileNet V2 and LSTM,” Sensors, vol. 21, no. 8, p. 2852, Apr. 2021, doi: 10.3390/s21082852.
[44] H. Huang, B. W. Hsu, C. Lee, and V. S. Tseng, “Development of a light‐weight deep learning model for cloud applications and
remote diagnosis of skin cancers,” The Journal of Dermatology, vol. 48, no. 3, pp. 310–316, Nov. 2020, doi: 10.1111/1346-
8138.15683.
BIOGRAPHIES OF AUTHORS
Yousra Dahdouh, Department of Informatics, Smart Systems and Emerging
Technologies Laboratory, Faculty of Sciences and Techniques Tangier, Morocco. Dahdouh
Yousra received a Master of Science and Technology degree in Computer Sciences from the
Faculty of Sciences and Techniques, Abdelmalek Essaadi University, Morocco in 2019. She is
currently pursuing a Ph.D. degree with the same faculty. Her research focuses on the
application of artificial intelligence techniques and IoT in the medical domain and healthcare.
She has authored three papers published in international conferences. She can be contacted at
email: dahdouhyousra@gmail.com.
Abdelhakim Boudhir Anouar, Department of Informatics, Smart Systems and
Emerging Technologies Laboratory, Faculty of Sciences and Techniques Tangier, Morocco.
Anouar Boudhir Abdelhakim is an Associate Professor at Abdelmalek Essaadi University and
an IEEE member since 2009. He received his Magister of Sciences and Technologies from the
University CADI AYYAD of Marrakesh in 2013, Master and Ph.D. degrees in Computer
Sciences Systems and Networks from the University Abdelmalek Essaadi of Tangier. His
research interests include big data, IoT, security, and networks. He can be contacted at email:
aboudhir@uae.ac.ma.
Mohamed Ben Ahmed, Department of Informatics, Smart Systems and
Emerging Technologies Laboratory, Faculty of Sciences and Techniques Tangier, Morocco.
Mohamed Ben Ahmed is currently an Associate Professor at Abdelmalek Essaadi University.
He received his Master of Science and Technology degree in Computer Sciences, the DESA
degree in Telecommunications, and the Ph.D. degree in Computer Sciences and
Telecommunications in 2002, 2005, and 2010, respectively from Abdelmalek Essaadi
University, Morocco. His research interests include big data, IA, networking,
telecommunications, and security. He can be contacted at email: mbenahmed@uae.ac.ma.
More Related Content

PDF
DETECTION OF DIFFERENT TYPES OF SKIN DISEASES USING RASPBERRY PI
PDF
DETECTION OF DIFFERENT TYPES OF SKIN DISEASES USING RASPBERRY PI
PDF
An ensemble framework augmenting surveillance cameras for detecting intruder ...
PDF
An ensemble framework augmenting surveillance cameras for detecting intruder ...
PDF
A NOVEL BIOMETRIC APPROACH FOR AUTHENTICATION IN PERVASIVE COMPUTING ENVIRONM...
PDF
A Novel Biometric Approach for Authentication In Pervasive Computing Environm...
PDF
Advanced Computational Intelligence: An International Journal (ACII)
PDF
A Novel Biometric Approach for Authentication In Pervasive Computing Environm...
DETECTION OF DIFFERENT TYPES OF SKIN DISEASES USING RASPBERRY PI
DETECTION OF DIFFERENT TYPES OF SKIN DISEASES USING RASPBERRY PI
An ensemble framework augmenting surveillance cameras for detecting intruder ...
An ensemble framework augmenting surveillance cameras for detecting intruder ...
A NOVEL BIOMETRIC APPROACH FOR AUTHENTICATION IN PERVASIVE COMPUTING ENVIRONM...
A Novel Biometric Approach for Authentication In Pervasive Computing Environm...
Advanced Computational Intelligence: An International Journal (ACII)
A Novel Biometric Approach for Authentication In Pervasive Computing Environm...

Similar to Embedded artificial intelligence system using deep learning and raspberrypi for the detection and classification of melanoma (20)

PDF
Malaria Detection System Using Microscopic Blood Smear Image
PDF
PDF
PDF
People Monitoring and Mask Detection using Real-time video analyzing
PDF
Deep Learning based Multi-class Brain Tumor Classification
PDF
Designing of Application for Detection of Face Mask and Social Distancing Dur...
PDF
Melanoma Skin Cancer Detection using Deep Learning
PDF
A Survey of Convolutional Neural Network Architectures for Deep Learning via ...
PDF
Object and Currency Detection for the Visually Impaired
PDF
PADDY CROP DISEASE DETECTION USING SVM AND CNN ALGORITHM
PDF
Hand gesture-based automatic door security system using squeeze and excitatio...
PDF
IRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
PPTX
x-RAYS PROJECT
DOCX
Image Recognition Expert System based on deep learning
PDF
Toddler monitoring system in vehicle using single shot detector-mobilenet and...
PDF
Common Skin Disease Diagnosis and Prediction: A Review
PDF
Review on Arduino-Based Face Mask Detection System
PDF
Detecting human fall using internet of things devices for healthcare applicat...
PDF
14. 23759.pdf
PDF
IRJET - Detection of Skin Cancer using Convolutional Neural Network
Malaria Detection System Using Microscopic Blood Smear Image
People Monitoring and Mask Detection using Real-time video analyzing
Deep Learning based Multi-class Brain Tumor Classification
Designing of Application for Detection of Face Mask and Social Distancing Dur...
Melanoma Skin Cancer Detection using Deep Learning
A Survey of Convolutional Neural Network Architectures for Deep Learning via ...
Object and Currency Detection for the Visually Impaired
PADDY CROP DISEASE DETECTION USING SVM AND CNN ALGORITHM
Hand gesture-based automatic door security system using squeeze and excitatio...
IRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
x-RAYS PROJECT
Image Recognition Expert System based on deep learning
Toddler monitoring system in vehicle using single shot detector-mobilenet and...
Common Skin Disease Diagnosis and Prediction: A Review
Review on Arduino-Based Face Mask Detection System
Detecting human fall using internet of things devices for healthcare applicat...
14. 23759.pdf
IRJET - Detection of Skin Cancer using Convolutional Neural Network
Ad

More from IAESIJAI (20)

PDF
Hybrid model detection and classification of lung cancer
PDF
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
PDF
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
PDF
Event detection in soccer matches through audio classification using transfer...
PDF
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
PDF
Optimizing deep learning models from multi-objective perspective via Bayesian...
PDF
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Exploring DenseNet architectures with particle swarm optimization: efficient ...
PDF
A transfer learning-based deep neural network for tomato plant disease classi...
PDF
U-Net for wheel rim contour detection in robotic deburring
PDF
Deep learning-based classifier for geometric dimensioning and tolerancing sym...
PDF
Enhancing fire detection capabilities: Leveraging you only look once for swif...
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
Depression detection through transformers-based emotion recognition in multiv...
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Enhancing financial cybersecurity via advanced machine learning: analysis, co...
PDF
Crop classification using object-oriented method and Google Earth Engine
Hybrid model detection and classification of lung cancer
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
Event detection in soccer matches through audio classification using transfer...
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
Optimizing deep learning models from multi-objective perspective via Bayesian...
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
A novel scalable deep ensemble learning framework for big data classification...
Exploring DenseNet architectures with particle swarm optimization: efficient ...
A transfer learning-based deep neural network for tomato plant disease classi...
U-Net for wheel rim contour detection in robotic deburring
Deep learning-based classifier for geometric dimensioning and tolerancing sym...
Enhancing fire detection capabilities: Leveraging you only look once for swif...
Accuracy of neural networks in brain wave diagnosis of schizophrenia
Depression detection through transformers-based emotion recognition in multiv...
A comparative analysis of optical character recognition models for extracting...
Enhancing financial cybersecurity via advanced machine learning: analysis, co...
Crop classification using object-oriented method and Google Earth Engine
Ad

Recently uploaded (20)

PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Network Security Unit 5.pdf for BCA BBA.
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
Encapsulation theory and applications.pdf
PPTX
Spectroscopy.pptx food analysis technology
PPTX
Big Data Technologies - Introduction.pptx
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
MIND Revenue Release Quarter 2 2025 Press Release
PPTX
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PPT
Teaching material agriculture food technology
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PPTX
MYSQL Presentation for SQL database connectivity
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Network Security Unit 5.pdf for BCA BBA.
Understanding_Digital_Forensics_Presentation.pptx
Programs and apps: productivity, graphics, security and other tools
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
20250228 LYD VKU AI Blended-Learning.pptx
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
Review of recent advances in non-invasive hemoglobin estimation
Encapsulation theory and applications.pdf
Spectroscopy.pptx food analysis technology
Big Data Technologies - Introduction.pptx
Mobile App Security Testing_ A Comprehensive Guide.pdf
MIND Revenue Release Quarter 2 2025 Press Release
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
Digital-Transformation-Roadmap-for-Companies.pptx
Spectral efficient network and resource selection model in 5G networks
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
Teaching material agriculture food technology
Chapter 3 Spatial Domain Image Processing.pdf
MYSQL Presentation for SQL database connectivity

Embedded artificial intelligence system using deep learning and raspberrypi for the detection and classification of melanoma

  • 1. IAES International Journal of Artificial Intelligence (IJ-AI) Vol. 13, No. 1, March 2024, pp. 1104~1111 ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i1.pp1104-1111  1104 Journal homepage: http://ijai.iaescore.com Embedded artificial intelligence system using deep learning and raspberrypi for the detection and classification of melanoma Yousra Dahdouh, Abdelhakim Boudhir Anouar, Mohamed Ben Ahmed SSET Research Team, C3S Laboratory, FSTT, Abdelmalek Essaâdi University, Tétouan, Morocco Article Info ABSTRACT Article history: Received Jul 9, 2023 Revised Oct 15, 2023 Accepted Nov 13, 2023 Melanoma is a kind of skin cancer that originates in melanocytes responsible for producing melanin, it can be a severe and potentially deadly form of cancer because it can metastasize to other regions of the body if not detected and treated early. To facilitate this process, Recently, various computer- assisted low-cost, reliable, and accurate diagnostic systems have been proposed based on artificial intelligence (AI) algorithms, particularly deep learning techniques. This work proposed an innovative and intelligent system that combines the internet of things (IoT) with a Raspberry Pi connected to a camera and a deep learning model based on the deep convolutional neural network (CNN) algorithm for real-time detection and classification of melanoma cancer lesions. The key stages of our model before serializing to the Raspberry Pi: Firstly, the preprocessing part contains data cleaning, data transformation (normalization), and data augmentation to reduce overfitting when training. Then, the deep CNN algorithm is used to extract the features part. Finally, the classification part with applied Sigmoid Activation Function. The experimental results indicate the efficiency of our proposed classification system as we achieved an accuracy rate of 92%, a precision of 91%, a sensitivity of 91%, and an area under the curve- receiver operating characteristics (AUC-ROC) of 0.9133. Keywords: Convolutional neural network Deep learning Dermoscopy images Internet of things Melanoma Raspberry Pi Skin cancer This is an open access article under the CC BY-SA license. Corresponding Author: Yousra Dahdouh SSET Research Team, C3S Laboratory, FSTT, Abdelmalek Essaâdi University Tétouan, Morocco Email: dahdouhyousra@gmail.com 1. INTRODUCTION Currently, cancer remains a significant global health challenge [1], [2], especially skin cancer [3], which is considered a leading cause of death in many cases if not detected early and diagnosed correctly. To ensure the success of the diagnosis process, oncologists in general must possess a comprehensive knowledge of the skin's primary layers, including the epidermis, dermis, and subcutaneous fat [4]. When abnormal growth occurs and irregular in the uppermost skin layer, it can give rise to mutations that form tumors, which may be either benign or malignant [5]. Melanoma is considered among the most harmful and deadly kinds of skin cancer that often looks like moles. It’s a dangerous, and deadly variety it grows rapidly and can influence and spread in all areas of the human body [6]. Diagnosing it in its early stages is vitally important, as it can contribute to minimizing the risk factor in patients, improving the prognosis of malignant melanoma, and treating it more easily [7], [8]. Dermoscopy is widely recognized as a common technique for the detection of skin lesions [9]. 
Nevertheless, the automatic detection of these tumors from dermoscopy images remains a major challenge for public health [10] due to the substantial similarities between melanoma and non-melanoma lesions. Consequently, there is a great demand for the scientific research community's development of new and innovative technologies to
  • 2. Int J Artif Intell ISSN: 2252-8938  Embedded artificial intelligence system using deep learning and raspberrypi for the … (Yousra Dahdouh) 1105 automatically analyze dermatoscopy images and assist oncologists [11], [12]. It is considered the most effective approach for early detection and involves promoting and facilitating more efficient skin monitoring systems through the utilization of advanced internet of things (IoT) techniques [13] and artificial intelligence algorithms [14] to develop automated systems to aid doctors for early and real-time detection. Computer vision is a field of artificial intelligence that focuses on enabling computers to interpret and understand visual information. In this area, the detection and classification of objects are crucial tasks, serving as fundamental building blocks for a wide range of applications. Due to the rise in the capabilities of computing facilities, deep learning [15] is revolutionizing computer vision and contains a set of algorithms based on the human brain's structure such as the neural networks. Deep learning techniques have several advantages for this have emerged as a powerful tool in image segmentation and classification tasks principally in the medical domain of the early diagnosis of melanoma [16]–[18]. We cite, for example the convolutional neural networks (CNN) [19]–[21], are inspired by the human visual cortex used especially in image identification and classification. This research primarily concentrates on the automated detection of melanoma cancer by combining advanced technologies: The IoT and deep learning. To achieve this objective, we have proposed an intelligent and embedded system with a Raspberry Pi designed for the real-time identification and classification of skin lesions captured using an embedded Pi camera and processed by a deep convolutional neural network model as containing an important number and varying types of layers including convolutional layers, pooling layers, and dense layers were meticulously trained using a dataset collected by the international skin imaging collaboration (ISIC) archive, which comprises two distinct classes: benign and malignant. The model is implemented in python using keras and tensorflow frameworks and was created firstly on a computer and after serialized to Raspberry Pi for real-time execution. The main steps of our proposed deep learning model before serializing to the Raspberry Pi are: 1) Passing the input dermoscopy images to the preprocessing step: data cleaning, data transformation (Normalization), and data augmentation to reduce overfitting when training; 2) Extract the features by applying the deep CNN algorithm; and 3) The classification part with the use of a Sigmoid Activation Function. Our experiment results show an accuracy rate of 92%, a precision of 91%, a sensitivity of 91%, and an area under the curve- receiver operating characteristics (AUC-ROC) of 0.9133. We hope our proposed system will help to diagnose, detect, and classify this melanoma cancer more efficiently way than other methods that have already been used. 
The major contributions of this study include: 1) A novel automated system is suggested for the classification of skin cancer in real-time, utilizing a combination of an Internet of Things device (Raspberry Pi) attached to a camera and an artificial intelligence model precise and optimized (deep CNN); 2) We improved the model efficiently with an increase in the dermoscopy images using augmentation techniques; 3) The architecture provides high accuracy (92%) with the best performance metrics obtained using a deep algorithm containing many layers with different parameters and a Sigmoid function for classification. The rest of the article is organized as follows: In section 2 We presented the proposed method with a brief overview of the dataset, hardware, and software system used in this paper. Section 3 contains the experiment results and evaluation with a little discussion. Finally, section 4 describes the conclusion, followed by the references. 2. METHOD 2.1. Dataset To implement our CNN model, we need the data for training. In this research, a skin cancer dataset [22] consisting of a balanced number of dermoscopic images of benign skin cancer moles and malignant ones was compiled from the ISIC archive. The dataset is segmented into 2,637 images in the training set and 660 images in the test. After collecting the dataset, we preprocessed all images by sizes of 128×128 pixels to obtain the same size as all the images. The dataset details are illustrated in Table 1. Table 1. Dataset description Class Benign/malignant Number of images class (benign) 1,800 Number of images class (malignant) 1,497 Total number of images 3.297 Dimension after preprocessing 128×128 pixels 2.2. Hardware system The hardware used for this research is the Raspberry Pi, a miniature personal computer, approximately the size of a credit card, introduced first time in the year 2012 [23], with an SD Card as a hard
  • 3.  ISSN: 2252-8938 Int J Artif Intell, Vol. 13, No. 1, March 2024: 1104-1111 1106 disk to stock the features of our model. The camera module is connected to the camera port of the Raspberry Pi for capturing an image of the lesion and a power cable for the power supply, as well as, a keyboard, HDMI display, and LCD display to view the result of our model. 2.3. Software design The software in this research consists firstly of preparing the operating system we have used Raspbian which is optimized for Raspberry Pi hardware and is based on debian linux. secondly, to build a detection skin cancer system using a Raspberry Pi, supporting software is needed with jupyter notebook to write the scripts in python, one of the commonly languages used for deep learning tasks with open-source frameworks: tensorflow. we have used the open-source computer vision library OpenCV which includes a large selection of algorithms that can help us to apply our model. Other libraries are used like scikit-learn. About the skin cancer model detection and classification system that has been trained on a laptop and serialized after to the Raspberry Pi, we have applied deep learning [24] is a part of the family of artificial intelligence [25] uses a multilayer approach; these layers are connected to extract features from the source data [26]. Every artificial neural network has layers, the higher the number of this layers, the deeper and more powerful the network. Recently, deep learning has been used frequently in various areas of computer vision including image classification, voice recognition, object detection, semantic segmentation [27]–[29], and other tasks [30] and achieved the best results in detection and classification tasks, particularly for the classification of medical images. For these reasons, we used it in this study specifically convolutional neural networks, which are a kind of neural network applied in deep learning mostly to analyze image or visual data [31]. The CNN was first proposed by LeCun et al. [32]. The CNN architecture has multiple layers [33], (see details in section 2.4 about our deep CNN proposed). 2.4. Proposed methodology The principal goal of this study is the production of an automatic system-friendly, lightweight, and simplified setup with low hardware resources for reduced cost and energy and facilitated connection, that will be applied in all aspects of healthcare specifically for the detection and classification of skin diseases. The main idea of our work is to implement the basic embedded system to aid doctors in melanoma cancer detection and make better medical decisions in real time. This system proposed applied an innovative combination: artificial intelligence specifically the deep learning algorithm and the IoT devices. This combination is a state-of-the-art and attractive solution for building a melanoma skin cancer classification system. The novelty of this research is evident in: − Among the first research that as far as i know used an approach that combined artificial intelligence (AI) and the IoT to implement a complete, intelligent, and automatic melanoma classification system. − Developing an efficient deep learning model with multiple layers deployed on an IoT device Raspberry Pi for skin cancer classification into 'melanoma' and 'benign' classes. − Since the ISIC archive is one of the most extensive open source databases accessible, this paper addresses evaluating the performance of dermoscopic images with our model CNN. 
Our methodology proceeds in three main steps, each consisting of several tasks: the artificial intelligence part, the Raspberry Pi part, and the deployment of the model on the Raspberry Pi. This section presents the key elements of our AI model for detecting and classifying skin cancer from dermoscopy images. We built a model based on the CNN algorithm with several layers to extract as many features as possible, and we tried a collection of activation functions in the classification layer to obtain the best accuracy. As illustrated in Figure 1, the proposed model contains a preprocessing step that includes image resizing, data cleaning, data transformation (normalization), and data augmentation to increase accuracy. We then employed a deep CNN to extract the features of the lesion. After that, three activation functions (softmax, sigmoid, and swish) were tried for classifying an image as benign or melanoma until the best result was found. Finally, we trained the model with the optimal hyperparameters; a sketch of the preprocessing stage is given after Figure 1.

Figure 1. The architecture of the proposed method
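Before the step-by-step description below, the following Python fragment gives a hedged illustration (under the assumption that the arrays X and y hold the resized images and their labels from the loading sketch above) of the preprocessing operations just listed: median filtering for noise removal, normalization of pixel values to [0, 1], and a random 80/20 train/test split.

    import cv2
    import numpy as np
    from sklearn.model_selection import train_test_split

    # X: (N, 128, 128, 3) uint8 images, y: (N,) labels with 0 = benign, 1 = malignant.
    X_filtered = np.array([cv2.medianBlur(img, 3) for img in X])   # median filter to smooth pixel intensity variation
    X_norm = X_filtered.astype("float32") / 255.0                  # normalize pixel values to the [0, 1] range

    # Random 80/20 split into training and test sets.
    X_train, X_test, y_train, y_test = train_test_split(X_norm, y, test_size=0.2, random_state=42)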
The proposed methodology is illustrated in Figure 1 and described in detail below:
a. Input image: the model is trained on a dataset of 3,297 images separated into two classes.
b. Preprocessing: after loading and reading the data, all images were resized to 128×128 pixels to make computation and training faster. The main preprocessing steps are image enhancement and noise removal: a median filter was used to reduce the intensity variation between neighboring pixels, following Islam et al. [34]. The pixel values were then normalized to the [0, 1] range to reduce the spread of the feature values. Finally, the train_test_split technique was applied to split the data randomly, with 80% used for training and 20% for testing.
c. Data augmentation: this is an AI technique that generates new data from existing data; it is used to train a model properly, increase the classifier's efficiency and accuracy, address class imbalance, reduce overfitting, and improve convergence [35]. Because no very large dataset was available and we were working with a finite quantity of data to train the CNN, we augmented the data with the ImageDataGenerator function of the Keras library, using rotation_range=30, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True, vertical_flip=True, and zoom_range=0.1.
d. Feature extraction and classification: after preprocessing, the deep CNN is used to extract features and classify the skin lesion. For feature extraction, the first part of the network splits the image points into several subsets, such as areas and points [36]; the proposed system trains this deep CNN from scratch to learn the optimal weights. The second part of the CNN performs classification between the two types of skin lesion, applying different activation functions in the dense layer as classifiers, namely softmax, sigmoid, and swish.
In the proposed method a CNN model built from scratch was applied, and this scratch-built model provides good performance and accuracy. Multiple convolution layers are used to detect increasingly complex features: each convolution layer, when fed an image, produces many activation maps that emphasize important image features, and the output of one layer is passed to the next layer as input, where more complex features are extracted. Batch normalization and max-pooling layers were added to prevent bias from the initial random weights, and the filters were applied with different parameters. The resulting features are then combined for classification by four fully connected layers, in which the softmax, sigmoid, and swish activation functions were tested. The proposed deep CNN model is illustrated in Figure 2; an indicative Keras sketch of the augmentation and model definition follows the figure caption.

Figure 2. Proposed deep CNN model architecture
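Since the paper does not list the exact layer-by-layer configuration, the following Keras sketch should be read as an indicative reconstruction rather than the authors' exact architecture: it combines the stated augmentation settings with a from-scratch CNN of stacked convolution, batch normalization, and max-pooling blocks, four fully connected layers, and a sigmoid output, compiled with binary cross-entropy and the Adam optimizer at the best reported learning rate. The filter and unit counts per layer are assumptions.

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import (Conv2D, BatchNormalization, MaxPooling2D,
                                         Flatten, Dense, Dropout)
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Augmentation settings reported in the paper.
    augmenter = ImageDataGenerator(rotation_range=30,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   horizontal_flip=True,
                                   vertical_flip=True,
                                   zoom_range=0.1)

    # Indicative from-scratch deep CNN; filter/unit counts are illustrative assumptions.
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
        BatchNormalization(),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        BatchNormalization(),
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), activation="relu"),
        BatchNormalization(),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(256, activation="relu"),
        Dense(128, activation="relu"),
        Dense(64, activation="relu"),
        Dropout(0.5),                      # dropout rate reported in the paper
        Dense(1, activation="sigmoid"),    # binary benign / melanoma output
    ])

    model.compile(optimizer=Adam(learning_rate=1e-5),   # best reported learning rate: 0.00001
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Training on the augmented stream (X_train, y_train from the preprocessing step above).
    model.fit(augmenter.flow(X_train, y_train, batch_size=32),
              validation_data=(X_test, y_test),
              epochs=100)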
3. RESULTS AND DISCUSSION
3.1. Experiments and results
The architecture's performance was evaluated and validated using several metrics: accuracy, recall/sensitivity, precision, and F1 score [37]. In addition, because this is a binary classification problem, performance is assessed with the ROC curve and the AUC metric [38]. The receiver operating characteristic (ROC) curve is a graphical representation that plots the sensitivity (true positive rate) against 1 − specificity (false positive rate) at various classification thresholds, and the AUC, the area under this curve, summarizes the performance of a binary classification model. The metric definitions are recalled below.
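For completeness, the standard definitions of these metrics, written in terms of true/false positives and negatives (TP, FP, TN, FN), are given here; these are the usual textbook formulas rather than anything specific to this paper.

    \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
    \mathrm{Precision} = \frac{TP}{TP + FP}
    \mathrm{Recall\ (Sensitivity)} = \frac{TP}{TP + FN}
    F_1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}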
We trained our model on the Google Colab platform using a graphics processing unit (GPU). The model was implemented with the TensorFlow framework and the open-source Keras packages. For training, the train_test_split technique was used to split the data randomly, and the model was compiled many times with different hyperparameters: Adam, RMSprop, stochastic gradient descent (SGD), and Nadam as optimizers with learning rates of 0.001, 0.0001, and 0.00001; batch sizes of 32, 64, and 128; and 50, 100, and 150 epochs, with binary cross-entropy as the loss function, until a good result was found. Figure 3(a) shows the training and test accuracy of the model, while Figure 3(b) shows the training and test loss. After training we observed a slight overfitting pattern, which can be attributed to the characteristics of the dataset used. Table 2 summarizes the results obtained with the proposed model.

Figure 3. The test results of the proposed method: (a) accuracy and (b) loss graph

Table 2. Results of the proposed model
Algorithm  Optimizer  LR       F1 score  Precision  Accuracy
Deep CNN   Adam       0.01     0.8944    0.8934     0.8958
Deep CNN   RMSProp    0.0001   0.8808    0.8802     0.8826
Deep CNN   SGD        0.00001  0.9076    0.9073     0.91
Deep CNN   Adam       0.00001  0.9129    0.9125     0.92

After several rounds of tuning, we achieved an accuracy of 92% with the following hyperparameters: optimizer Adam, learning rate 0.00001, dropout 0.5, batch size 32, and 100 epochs. Since the goal is to decide on the lesion with the embedded system, the resulting classification model was serialized and copied to the Raspberry Pi. These results show the effectiveness of using a deep learning model, in particular a deep CNN, integrated into an IoT system. Figure 4 presents the 2×2 confusion matrix of our model; the classification accuracy is 92% and the AUC value obtained is 0.9133 (Figure 5). Compared with the models found in other research, as illustrated in Table 3 (details in section 3.2), our proposed method achieves the best accuracy, thanks to the careful choice of the number of convolutional layers, convolutional filters, and pooling layers used to extract sufficient features, and the careful selection of the activation function used as the classifier, yielding a highly accurate skin lesion classifier. A hedged evaluation sketch using scikit-learn is given after the figure captions.

Figure 4. Confusion matrix
Figure 5. The ROC curve of our model
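As an indication of how such evaluation figures can be reproduced with the scikit-learn library mentioned earlier, the snippet below is a sketch under the assumption that model, X_test, and y_test come from the training code above; it is not the authors' exact evaluation script.

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, confusion_matrix, roc_auc_score)

    # Predicted probabilities from the sigmoid output, thresholded at 0.5.
    y_prob = model.predict(X_test).ravel()
    y_pred = (y_prob >= 0.5).astype(int)

    print("Accuracy :", accuracy_score(y_test, y_pred))
    print("Precision:", precision_score(y_test, y_pred))
    print("Recall   :", recall_score(y_test, y_pred))
    print("F1 score :", f1_score(y_test, y_pred))
    print("AUC      :", roc_auc_score(y_test, y_prob))               # paper reports 0.9133
    print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))   # 2x2 matrix (Figure 4)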
Table 3. Comparison with existing work
Reference  Model                   Accuracy (%)
[39]       CNN with 9 layers       80.52
[40]       Modified GoogleNet      81
[41]       CNN + SVM               89.52
[42]       CNN                     87.82
[43]       MobileNetV2-LSTM        90.72
[44]       DenseNet-EfficientNet   85.80
Proposed   Deep CNN                92

3.2. Discussion
Several works have been published on melanoma diagnosis using different techniques. In this article, advanced technologies based on the IoT and deep learning are proposed: the deep learning architecture is a deep CNN implemented from scratch with a substantial number and variety of layers (see section 2.4), and after training the model is serialized to the Raspberry Pi to enable real-time skin cancer diagnosis, achieving an accuracy of 92%. We compared our results with those obtained by other authors using other techniques. Jianu et al. [39] proposed a melanoma classification system using a CNN with an architecture of nine layers, organized around two main steps, preprocessing and the CNN architecture; this model yields a result of 80.52%. Kassem et al. [40] presented, for skin lesion classification, a GoogleNet model with modifications at the level of filters and layers (adding layers and replacing layers with others); their model reaches 81% accuracy, compared with 63% for the original GoogleNet. Haghighi et al. [41] proposed a melanoma recognition system based on the fusion of a CNN and a support vector machine (SVM): the CNN architecture extracts the features and the SVM acts as the classifier. In that study, data augmentation is applied first, features are then extracted with the CNN, a fully connected layer, a dropout layer, and a rectified linear unit (ReLU) layer are added, and the final classification stage uses the SVM classifier; the authors obtained an accuracy of 89.52%. Guarnizo et al. [42] proposed a skin cancer diagnosis model based on a CNN with many hidden layers, applying the ReLU activation to every convolutional layer and the sigmoid activation function for the classification part; they report an average accuracy of 87.82%. Srinivasu et al. [43] proposed an efficient model that can run on lightweight computational devices, based on MobileNetV2 and long short-term memory (LSTM), to classify skin disease, obtaining an accuracy of more than 85%. Huang et al. [44] presented a deep learning method for skin cancer classification based on DenseNet and EfficientNet, obtaining an accuracy of 85.8%.

4. CONCLUSION
Skin diseases have become increasingly prevalent in many regions. This paper presents an intelligent system that combines the Internet of Things and deep learning algorithms for the real-time diagnosis of melanoma lesions, in which data augmentation was applied to increase the number of images, reduce overfitting, and improve convergence. The best result achieved an accuracy of 92%, and the system's performance can be further refined by expanding the dataset and exploring advanced preprocessing techniques.
Comparing this study with previous empirical studies (see the discussion section), our findings suggest that the proposed model is one of the most effective solutions for rapid decision-making in the healthcare sector, particularly in medical image applications such as the diagnosis of skin cancer. Furthermore, there is room for further improvement by incorporating more specific and carefully curated datasets and by developing new layers in deep learning algorithms.

REFERENCES
[1] R. L. Siegel, K. D. Miller, H. E. Fuchs, and A. Jemal, “Cancer statistics, 2022,” CA: A Cancer Journal for Clinicians, vol. 72, no. 1, pp. 7–33, Jan. 2022, doi: 10.3322/caac.21708.
[2] W. Li, A. N. Joseph Raj, T. Tjahjadi, and Z. Zhuang, “Digital hair removal by deep learning for skin lesion segmentation,” Pattern Recognition, vol. 117, p. 107994, Sep. 2021, doi: 10.1016/j.patcog.2021.107994.
[3] H. Sung et al., “Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, Feb. 2021, doi: 10.3322/caac.21660.
[4] R. Gordon, “Skin Cancer: An Overview of Epidemiology and Risk Factors,” Seminars in Oncology Nursing, vol. 29, no. 3, pp. 160–169, Aug. 2013, doi: 10.1016/j.soncn.2013.06.002.
[5] G. N. K. Babu and V. J. Peter, “Skin cancer detection using support vector machine with histogram of oriented gradients features,” ICTACT Journal on Soft Computing, vol. 11, no. 2, pp. 2301–2305, 2021, doi: 10.21917/ijsc.2021.0329.
[6] M. Q. Khan et al., “Classification of Melanoma and Nevus in digital images for diagnosis of skin cancer,” IEEE Access, vol. 7, pp. 90132–90144, 2019, doi: 10.1109/access.2019.2926837.
[7] M. Perez, J. A. Abisaad, K. D. Rojas, M. A. Marchetti, and N. Jaimes, “Skin cancer: Primary, secondary, and tertiary prevention. Part I,” Journal of the American Academy of Dermatology, vol. 87, no. 2, pp. 255–268, Aug. 2022, doi: 10.1016/j.jaad.2021.12.066.
[8] K. D. Rojas, M. E. Perez, M. A. Marchetti, A. J. Nichols, F. J. Penedo, and N. Jaimes, “Skin cancer: Primary, secondary, and tertiary prevention. Part II,” Journal of the American Academy of Dermatology, vol. 87, no. 2, pp. 271–288, Aug. 2022, doi: 10.1016/j.jaad.2022.01.053.
[9] L. Thomas and S. Puig, “Dermoscopy, digital dermoscopy and other diagnostic tools in the early detection of Melanoma and follow-up of high-risk skin cancer patients,” Acta Dermato Venereologica, vol. 97, 2017, doi: 10.2340/00015555-2719.
[10] A. Esteva et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, Jan. 2017, doi: 10.1038/nature21056.
[11] A. T. Young et al., “The role of technology in melanoma screening and diagnosis,” Pigment Cell & Melanoma Research, vol. 34, no. 2, pp. 288–300, Aug. 2020, doi: 10.1111/pcmr.12907.
[12] W. Barhoumi and A. Khelifa, “Skin lesion image retrieval using transfer learning-based approach for query-driven distance recommendation,” Computers in Biology and Medicine, vol. 137, p. 104825, Oct. 2021, doi: 10.1016/j.compbiomed.2021.104825.
[13] B. R. Ray, M. U. Chowdhury, and J. H. Abawajy, “Secure object tracking protocol for the internet of things,” IEEE Internet of Things Journal, vol. 3, no. 4, pp. 544–553, Aug. 2016, doi: 10.1109/jiot.2016.2572729.
[14] J. Borana, “Applications of artificial intelligence & associated technologies,” in Proceeding of International Conference on Emerging Technologies in Engineering, Biomedical, Management and Science, 2016, pp. 64–67.
[15] M. Sohail et al., “Racial identity-aware facial expression recognition using deep convolutional neural networks,” Applied Sciences, vol. 12, no. 1, p. 88, Dec. 2021, doi: 10.3390/app12010088.
[16] F. Olayah, E. M. Senan, I. A. Ahmed, and B. Awaji, “AI techniques of dermoscopy image analysis for the early detection of skin lesions based on combined CNN Features,” Diagnostics, vol. 13, no. 7, p. 1314, Apr. 2023, doi: 10.3390/diagnostics13071314.
[17] W. Salma and A. S. Eltrass, “Automated deep learning approach for classification of malignant melanoma and benign skin lesions,” Multimedia Tools and Applications, vol. 81, no. 22, pp. 32643–32660, Apr. 2022, doi: 10.1007/s11042-022-13081-x.
[18] S. Tiwari, “Dermatoscopy using multi-layer perceptron, convolution neural network, and capsule network to differentiate Malignant Melanoma From Benign Nevus,” International Journal of Healthcare Information Systems and Informatics, vol. 16, no. 3, pp. 58–73, Jul. 2021, doi: 10.4018/ijhisi.20210701.oa4.
[19] L. Alzubaidi et al., “Review of deep learning: concepts, CNN architectures, challenges, applications, future directions,” Journal of Big Data, vol. 8, pp. 1–74, Mar. 2021, doi: 10.1186/s40537-021-00444-8.
[20] H.-C. Shin et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, May 2016, doi: 10.1109/tmi.2016.2528162.
[21] A. K. Sharma et al., “Dermatologist-level classification of skin cancer using cascaded ensembling of convolutional neural network and handcrafted features based deep neural network,” IEEE Access, vol. 10, pp. 17920–17932, 2022, doi: 10.1109/access.2022.3149824.
[22] C. Fanconi, “Skin Cancer: Malignant vs. Benign,” Kaggle. Accessed: Oct. 25, 2023. [Online]. Available: https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign
[23] M. Maksimović, V. Vujović, N. Davidović, V. Milošević, and B. Perišić, “Raspberry Pi as Internet of Things hardware: performances and constraints,” in Design Issues, 2014, vol. 3, pp. 1–6.
[24] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi: 10.1038/nature14539.
[25] N. T. Duc, Y.-M. Lee, J. H. Park, and B. Lee, “An ensemble deep learning for automatic prediction of papillary thyroid carcinoma using fine needle aspiration cytology,” Expert Systems with Applications, vol. 188, Feb. 2022, doi: 10.1016/j.eswa.2021.115927.
[26] Z. Alyafeai and L. Ghouti, “A fully-automated deep learning pipeline for cervical cancer classification,” Expert Systems with Applications, vol. 141, Mar. 2020, doi: 10.1016/j.eswa.2019.112951.
[27] H. Rashid, M. A. Tanveer, and H. Aqeel Khan, “Skin lesion classification using GAN based data augmentation,” in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jul. 2019, pp. 916–919, doi: 10.1109/embc.2019.8857905.
[28] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in Advances in Neural Information Processing Systems, 2016, pp. 379–387.
[29] S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, and J. Gao, “Deep learning-based text classification: A comprehensive review,” ACM Computing Surveys, vol. 54, no. 3, pp. 1–40, Apr. 2021, doi: 10.1145/3439726.
[30] P. Li, D. Wang, L. Wang, and H. Lu, “Deep visual tracking: Review and experimental comparison,” Pattern Recognition, vol. 76, pp. 323–338, Apr. 2018, doi: 10.1016/j.patcog.2017.11.007.
[31] W. Fang, P. E. D. Love, H. Luo, and L. Ding, “Computer vision for behaviour-based safety in construction: A review and future directions,” Advanced Engineering Informatics, vol. 43, Jan. 2020, doi: 10.1016/j.aei.2019.100980.
[32] Y. LeCun et al., “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, Dec. 1989, doi: 10.1162/neco.1989.1.4.541.
[33] S. Balaji, “Binary Image classifier CNN using TensorFlow,” Techiepedia, 2020. Accessed: Oct. 25, 2023. [Online]. Available: https://medium.com/techiepedia/binary-image-classifier-cnn-using-tensorflow-a3f5d6746697
[34] M. Z. Islam, M. S. Hossain, R. ul Islam, and K. Andersson, “Static hand gesture recognition using convolutional neural network with data augmentation,” in 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), May 2019, pp. 324–329, doi: 10.1109/iciev.2019.8858563.
[35] C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” Journal of Big Data, vol. 6, no. 60, pp. 1–48, Jul. 2019, doi: 10.1186/s40537-019-0197-0.
[36] K. Swaraja, “Protection of medical image watermarking,” Journal of Advanced Research in Dynamical and Control Systems, vol. 9, pp. 480–486, 2017.
[37] Y. Liu, Y. Zhou, S. Wen, and C. Tang, “A strategy on selecting performance metrics for classifier evaluation,” International Journal of Mobile Computing and Multimedia Communications, vol. 6, no. 4, pp. 20–35, Oct. 2014, doi: 10.4018/ijmcmc.2014100102.
[38] J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology, vol. 143, no. 1, pp. 29–36, Apr. 1982, doi: 10.1148/radiology.143.1.7063747.
[39] S. R. Stefan Jianu, L. Ichim, and D. Popescu, “Automatic diagnosis of skin cancer using neural networks,” in 2019 11th International Symposium on Advanced Topics in Electrical Engineering (ATEE), Mar. 2019, pp. 1–4, doi: 10.1109/atee.2019.8724938.
[40] M. A. Kassem, K. M. Hosny, and M. M. Fouad, “Skin lesions classification into eight classes for ISIC 2019 using deep convolutional neural network and transfer learning,” IEEE Access, vol. 8, pp. 114822–114832, 2020, doi: 10.1109/access.2020.3003890.
[41] S. N. Haghighi, H. Danyali, M. S. Helfroush, and M. H. Karami, “A deep convolutional neural network for melanoma recognition in dermoscopy images,” in 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE), Oct. 2020, pp. 453–456, doi: 10.1109/iccke50421.2020.9303684.
[42] J. G. Guarnizo, S. R. Borda, E. C. C. Poveda, and A. M. Rojas, “Automated malignant melanoma classification using convolutional neural networks,” Ciencia e Ingeniería Neogranadina, vol. 32, no. 2, pp. 171–185, Dec. 2022, doi: 10.18359/rcin.6270.
[43] P. N. Srinivasu, J. G. SivaSai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, “Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM,” Sensors, vol. 21, no. 8, p. 2852, Apr. 2021, doi: 10.3390/s21082852.
[44] H. Huang, B. W. Hsu, C. Lee, and V. S. Tseng, “Development of a light-weight deep learning model for cloud applications and remote diagnosis of skin cancers,” The Journal of Dermatology, vol. 48, no. 3, pp. 310–316, Nov. 2020, doi: 10.1111/1346-8138.15683.

BIOGRAPHIES OF AUTHORS
Yousra Dahdouh is with the Department of Informatics, Smart Systems and Emerging Technologies Laboratory, Faculty of Sciences and Techniques, Tangier, Morocco. She received a Master of Science and Technology degree in Computer Sciences from the Faculty of Sciences and Techniques, Abdelmalek Essaadi University, Morocco, in 2019, and is currently pursuing a Ph.D. degree at the same faculty. Her research focuses on the application of artificial intelligence techniques and the IoT in the medical and healthcare domain. She has authored three papers published in international conferences. She can be contacted at email: dahdouhyousra@gmail.com.
Abdelhakim Boudhir Anouar is with the Department of Informatics, Smart Systems and Emerging Technologies Laboratory, Faculty of Sciences and Techniques, Tangier, Morocco. He is an Associate Professor at Abdelmalek Essaadi University and an IEEE member since 2009. He received his Magister of Sciences and Technologies from the University Cadi Ayyad of Marrakesh in 2013, and his Master and Ph.D. degrees in Computer Sciences, Systems and Networks from the University Abdelmalek Essaadi of Tangier. His research interests include big data, IoT, security, and networks. He can be contacted at email: aboudhir@uae.ac.ma.
Mohamed Ben Ahmed is with the Department of Informatics, Smart Systems and Emerging Technologies Laboratory, Faculty of Sciences and Techniques, Tangier, Morocco. He is currently an Associate Professor at Abdelmalek Essaadi University. He received his Master of Science and Technology degree in Computer Sciences, the DESA degree in Telecommunications, and the Ph.D. degree in Computer Sciences and Telecommunications in 2002, 2005, and 2010, respectively, from Abdelmalek Essaadi University, Morocco.
His research interests include big data, AI, networking, telecommunications, and security. He can be contacted at email: mbenahmed@uae.ac.ma.