Article
PLC-Controlled Intelligent Conveyor System with AI-Enhanced
Vision for Efficient Waste Sorting
Natheer Almtireen 1,*, Viraj Reddy 2, Max Sutton 2, Alexander Nedvidek 2, Caden Karn 2, Mutaz Ryalat 1,
Hisham Elmoaqet 1 and Nathir Rawashdeh 1,2,*
1 Department of Mechatronics Engineering, German Jordanian University, Amman 11180, Jordan;
mutaz.ryalat@gju.edu.jo (M.R.); hisham.elmoaqet@gju.edu.jo (H.E.)
2 Department of Applied Computing, Michigan Technological University, Houghton, MI 49931, USA;
greddy1@mtu.edu (V.R.); maxsutto@mtu.edu (M.S.); adnedvid@mtu.edu (A.N.); ctkarn@mtu.edu (C.K.)
* Correspondence: natheer.almtireen@gju.edu.jo (N.A.); nathir.rawashdeh@gju.edu.jo or
narawash@mtu.edu (N.R.)
Abstract: Current waste sorting mechanisms, particularly those relying on manual pro-
cesses, semi-automated systems, or technologies without Artificial Intelligence (AI) inte-
gration, are hindered by inefficiencies, inaccuracies, and limited scalability, reducing their
effectiveness in meeting growing waste management demands. This study introduces a
prototype waste sorting machine that integrates an AI-driven vision system with a Pro-
grammable Logic Controller (PLC) for high-accuracy automated waste sorting. The system,
powered by the YOLOv8 deep learning model, achieved sorting accuracies of 88% for
metal cans, 75% for paper, and 91% for plastic bottles, with an overall precision of 90%, a
recall of 80%, and a mean average precision (mAP50) of 86%. The vision system provides
real-time classification, while the PLC manages conveyor and actuator operations to ensure
seamless sorting. Experimental results in a controlled environment validate the system’s
high accuracy, minimal processing delays, and scalability for industrial recycling appli-
cations. This innovative integration of AI vision with PLC automation enhances sorting
efficiency, reduces ecological impacts, and minimizes labor dependency. Furthermore, the
system aligns with sustainable waste management practices, promoting circular economy
principles and advancing the Sustainable Development Goals (SDGs).
Keywords: waste sorting; machine vision; sustainability; PLC; green technology;
artificial intelligence
1. Introduction and Background
1.1. Global Challenges in Waste Management
In recent years, the demand for effective waste management has grown due to chal-
lenges related to the increasing volume of Municipal Solid Waste (MSW). The interplay
of rapid urbanization and industrialization, along with inadequate waste management
practices, has markedly exacerbated various environmental issues. Global waste generation
is increasing at an alarming rate and shows no indication of a decline. In 2020, the world
produced approximately 2.1 billion tons of MSW annually, a figure that is expected to
increase by 56% by 2050, reaching 3.8 billion tons if no urgent measures are implemented;
refer to Figure 1. This dramatic increase in waste generation places immense pressure on
governments and authorities around the world to implement sustainable waste manage-
ment practices that effectively mitigate environmental damage while addressing the social
and economic implications of waste generation [1].
[Figure 1: bar chart; projected MSW values of 2.126, 2.684, 3.229, and 3.782 billion tonnes for 2020, 2030, 2040, and 2050, respectively.]
Figure 1. Projected global MSW generation (in billions of tons) for the years 2020, 2030, 2040, and 2050,
if immediate and effective waste management interventions are not implemented [1].
Currently, the demand for effective waste management has grown due to the chal-
lenges related to the increasing volume of recyclable materials. Rapid urbanization and
industrialization have resulted in a dramatic increase in MSW, and the inadequate han-
dling of these wastes contributes to significant environmental issues such as air and water
pollution, loss of biodiversity, and climate change [1]. In countries undergoing urbanization
and industrialization, shifts in housing and consumption patterns, along with access to
a broader array of products, lead to an increase in per capita waste production. This trend
is illustrated in Figure 2, which shows a clear relationship between higher Gross Domestic
Product (GDP) and increased waste generation per capita [1].
Figure 2. Relationship between GDP per capita (purchasing power parity, constant 2017 international
USD) and waste generation (kg/person/year), based on data available between 2010 and 2020 [1].
The United States, for example, generated approximately 12% of global MSW in 2018,
despite representing less than 5% of the global population. This places the United States
among the highest per capita waste producers, with an average American generating
more than 800 kilograms of waste per year. This disproportionate waste generation in
wealthy countries reflects the impact of economic growth, industrialization, urbanization,
and consumption and housing patterns on increasing waste volumes [1]. In addition,
disparities in the generation of waste per capita between regions highlight the inequality in
waste generation.
Figure 3 indicates that although North America and Central and South Asia generate
similar total amounts of MSW, per capita waste production varies significantly between them.
This suggests that economic factors and population growth are key drivers of MSW generation.
As urbanization accelerates in middle-income nations, the challenges of waste management
intensify, particularly in areas without adequate waste disposal systems [1]. Singh et al. [2]
highlighted the main shortcomings in India’s urban approach to solid waste management,
including overloaded landfills, inefficient waste segregation, and inadequate policy en-
forcement, underscoring the need for technological innovations and greater participation of
the private sector to improve waste processing and recycling efficiency.
Figure 3. Regional MSW generation: total MSW (million tons) and MSW per capita (kg/person/day) [1].
1.2. Towards Sustainable Waste Management
Analyzing disparities in the generation of waste per capita highlights the significant
role of waste management in global sustainability. This underscores the necessity of efficient
waste management systems, which are crucial for environmental and social equity and are
also essential for improving recycling efficiency and minimizing dependence on landfills.
This aligns with social and environmental justice, particularly in line with the Sustainable
Development Goals (SDGs) [1]. Effective waste management driven by public authorities,
corporations, and civil society affects SDG 11 on sustainable cities, SDG 12 on responsible
consumption, and SDG 13 on climate action. Inadequate waste management exacerbates
inequality and impacts vulnerable populations through pollution and poor infrastructure.
Hence, waste management plays an essential role in achieving multiple SDGs, emphasizing
the integration of sustainability into waste governance [1]. These insights highlight the
necessity of enhancing the circular economy transition, using Artificial Intelligence (AI)
and automation technologies to achieve these goals.
The circular economy concept focuses on preserving the value and quality of products,
components, and materials beyond their initial use phase [3]. The momentum behind
the circular economy has grown among scholars and practitioners, although varied in-
terpretations persist. Kirchherr et al. [4] highlighted that the circular economy is often
characterized by reducing, reusing, and recycling activities. Shifting from our existing
linear production and consumption models to a circular economy is considered vital to
achieving sustainability objectives and sustaining industrial competitiveness. This is a
powerful tool for climate change mitigation by significantly reducing carbon emissions in
heavy industries such as steel, plastics, aluminum, and cement, which are responsible for
substantial greenhouse gas emissions [5]. A report by Material Economics [5] suggested
that by making the best use of existing materials, the European Union could achieve annual
CO2 reductions of up to 296 million tons by 2050, substantially contributing to its target of
net zero emissions.
1.3. Sensor-Based Systems to AI: Advancing Waste Sorting
Traditional automated sorting systems, which predominantly utilize programmable
logic controllers (PLCs) linked to sensor networks, face challenges in processing mixed or
unorganized waste streams. PLCs, extensively used in the domain of industrial automa-
tion, conventionally employ devices such as proximity sensors, limit switches, and push
buttons to obtain electrical input from the physical environment. Although these sensors
exhibit a high degree of efficiency in detecting the presence and proximity of objects, they
demonstrate limited capability to differentiate between different materials such as plastic,
metal, and paper. In mixed waste streams, the inability to distinguish between dissimilar
materials obstructs the sorting process, thereby complicating the precise segregation of recy-
clable waste. In addition, the diversity of geometries, dimensions, or constituent materials
presented by such waste streams makes conventional sensor-based systems insufficient for
current requirements.
Bozma et al. [6] concluded that conventional sensor-based systems are ineffective
in classifying objects with arbitrary shapes and orientations in industrial settings, which
requires sophisticated vision systems capable of real-time classification, especially for
complex sorting operations. Kazemi et al. [7] demonstrated the effectiveness of integrating
vision systems with PLCs for industrial pick-and-place robots, highlighting improvements
in object detection accuracy, real-time processing, and the flexibility of PLC-controlled
robotic systems in sorting tasks.
One of the critical challenges in modern waste management is efficiently sorting
and recycling waste, particularly in urban and industrial settings, where the volume and
complexity of waste streams continue to grow. Through their study on the influence of AI in
waste sorting, Wilts et al. [8] illustrated that integrating AI into waste management systems
considerably improves the efficiency of recycling and the purity of the material. The “ZRR
for municipal waste project” underscores the tangible effects of AI on operational efficiency
and safety, thus highlighting its potential to advance a sustainable circular economy.
The use of AI and Machine Learning (ML) has shown promise in automating this
process and enhancing the accuracy of the sorting, as demonstrated by several recent
studies. Lubongo et al. [9] reviewed recent advances in the integration of ML and AI with
spectroscopic methods and machine vision for the classification of plastic waste. Their
investigation highlights advances in automated technologies that can achieve up to 99.99%
sorting accuracy and processing speeds of 10 tons per hour [9]. Although challenges persist
in the classification of complex films and plastics, AI-driven systems have significantly
improved accuracy and throughput, indicating a positive trajectory for advancing recycling
technologies. Cheng et al. [10] investigated an AI-powered sorting robot that utilizes image
recognition and advanced grippers to classify containers such as PET bottles and colored
glass to tackle Japan’s recycling challenges. Feng et al. [11] designed an intelligent waste
bin powered by GECM-EfficientNet, achieving above 94% accuracy in waste classification
on two datasets.
Machine vision systems, particularly those that use deep learning algorithms, can
classify objects based on visual data, enabling more accurate real-time sorting [12].
Strollo et al. [13] developed a system for the automatic classification of waste materi-
als using computer vision and ML techniques. This system utilizes both Near-Infrared
(NIR) and RGB cameras to analyze and identify incoming objects, achieving substantial
classification accuracy. Similarly, Jacobsen et al. [14] explored the application of AI in public
waste sorting through an automated bin system, the “Waste Wizard”, deployed in public
spaces such as zoos and retail stores. Their findings indicate that public participation,
knowledge, and attitudes play an important role in waste sorting behavior, highlighting
how user interaction with AI-powered systems can promote greater recycling efficacy in
public contexts.
In large-scale applications, Wilts et al. [8] investigated the use of robotic AI systems
in municipal waste sorting facilities to replace or supplement manual sorting. Their re-
search demonstrated improvements in recycling rates and the purity of sorted materials,
which highlights the potential scalability of robotic AI solutions in industrial environ-
ments. This study serves as a valuable reference for the scalability and operational effi-
ciency that our proposed prototype intends to achieve. Further advancing this domain,
Mohammed et al. [15] presented an AI-based digital model customized for smart cities,
designed to automatically classify waste according to the principles of the circular econ-
omy. Using an artificial neural network (ANN) and feature fusion techniques, their model
achieved an accuracy of 91.7% in various waste categories, showcasing the role of AI in
the solution to urban waste challenges and meeting recycling requirements. These studies
collectively establish a foundation for the integration of AI in automated waste sorting,
supporting the design and scalability of sustainable solutions in diverse settings.
Convolutional Neural Networks (CNNs) have demonstrated a significant level of
effectiveness in the domain of image classification tasks. This achievement is largely due to
their ability to learn intricate patterns and features from extensive datasets, enabling them
to generalize and accurately identify new and previously unseen objects. Systems such as
RecycleNet, which builds on the Mask R-CNN algorithm [16], have achieved 90.2% accuracy
in the classification of recyclable materials, including aluminum, paper, PET bottles, and
nylon [17]. This has improved recycling efficiency by automating the processes for
classifying and retrieving materials. These systems have been successfully implemented in
a range of applications, including industrial automation and recycling tasks.
Various research works in the literature have highlighted the application of CNNs
for waste management systems. Melinte et al. [18] presented enhanced CNN-based object
detectors for municipal waste detection on a single autonomous robotic system, achieving
an accuracy of 97.63% with SSD and 95.76% with Faster R-CNN. Wu et al. [19] introduced
a hybrid ensemble CNN model boosted by Multi-Scale Local Binary Pattern (MLBP) and
Simulated Annealing (SA) for optimization, significantly enhancing accuracy to 99.01% on
TrashNet and 99.41% on HGCD datasets. Chauhan et al. [20] developed an enhanced waste
management system leveraging a CNN-based image classifier to automate waste sorting,
resulting in notable efficiency and cost reductions. Their model outperformed traditional
models such as AlexNet, VGG16, and ResNet34, with a classification accuracy of 94.53%.
Abuhejleh et al. [21] employed a CNN-based method with transfer learning to categorize
recyclable materials, achieving 95% accuracy for categories such as plastic, paper, and metal.
Huang et al. [22] introduced a vision transformer-based method that enhances waste
classification by utilizing self-attention mechanisms, achieving a top accuracy of 96.98% on
the TrashNet dataset and outperforming traditional CNNs by effectively addressing global
information within images. Bhattacharya et al. [23] presented the Self-Adaptive Waste
Management System (SAWMS), which employs CNNs and conveyor systems for real-time
sorting. It is capable of adapting and self-training to maintain accuracy with changing
waste compositions.
Abdu et al. [24] conducted a thorough review of deep learning techniques applied in
waste management, pointing out current research gaps and future potential. The study
highlighted the limited availability of comprehensive analyses and application-specific
datasets for image classification and object detection in waste tasks. It evaluated more than
twenty datasets and critiqued existing methodologies while suggesting future research
paths [24]. Among object detection frameworks, the YOLO (You Only Look Once) deep
learning model has emerged as one of the most widely adopted, known for its exceptional
speed and accuracy in real-time applications [25]. Ultralytics YOLOv8 [26] represents a
significant advance, mainly due to extensive optimizations that have markedly improved
the model’s proficiency in object categorization [26]. These refinements make it particularly
suitable for scenarios that require rapid and accurate decision-making, as commonly re-
quired in automated sorting systems. The latest iteration, YOLOv11, further advances these
capabilities, providing state-of-the-art performance in a variety of applications, including
detection, segmentation, pose estimation, and tracking. In waste sorting systems, the YOLO
architecture can be trained to classify different categories of recyclables, such as plastic
bottles, metal cans, and paper, even in situations where these objects are combined or
displayed in chaotic or unstructured settings, enabling more efficient recycling processes.
Several studies used the YOLO deep learning model for waste classification systems.
Choi et al. [27] enhanced plastic waste sorting by integrating image sensors with the YOLO
deep learning model, achieving more than 91.7% accuracy in distinguishing plastics such as
PET from PET-G. Wen et al. [28] introduced a sorting approach that combines the YOLOX
detection model with the DeepSORT tracking algorithm for rapid real-time plastic waste
sorting. This approach leverages visual detection to increase sorting accuracy and efficiency,
demonstrating considerable flexibility and potential in complex sorting scenarios.
Several studies have demonstrated the potential of combining AI vision systems
with PLCs to improve waste sorting efficiency. Nandre et al. [29] explored a vision-based
recycling system controlled by a PLC, emphasizing the benefits of AI in the accurate
detection of recyclable materials. Similarly, Kokoulin et al. [30] explored modern sorting
techniques that combine AI with conventional sorting technologies. The aim was to
improve the efficiency and effectiveness of MSW management. These studies suggest
that AI and ML algorithms can significantly improve waste classification and segregation
compared to traditional sensor-based systems.
Multiple research efforts have explored the application of AI, spectrometry, and com-
puter vision methods to enhance sorting processes. For example, Zulfiqar et al. [31]
introduced a PLC-based object sorting system that uses a combination of inductive, laser,
and electromagnetic sensors to sort materials. However, while these methods are promising,
they are still limited in handling unsegregated waste and complex material compositions.
On the other hand, AI-powered vision systems can be trained to differentiate objects even
in mixed, cluttered environments, making them more effective in real-world scenarios [32].
Investigations of AI-driven sorting solutions additionally emphasize the importance
of scalability. Given that waste sorting facilities handle significant amounts of materials,
achieving efficient scalability is crucial to sustaining high levels of operational throughput
over time. Many studies, such as [33–35], have shown that AI models such as YOLO
can process multiple objects in real time, offering scalability for industrial applications.
This scalability, coupled with the ability to retrain models on new material classes, makes
AI-driven sorting systems highly adaptable to future recycling needs.
This paper presents an approach to designing a conveyor-based automated recycling
sorting machine that integrates a PLC with an AI-driven vision system. Using the YOLOv8
deep learning model, the system can classify and sort paper, plastic, and metal cans with
high precision. The AI-driven vision system facilitates instantaneous categorization of
materials, whereas the PLC governs the conveyor mechanism and associated actuators.
These components work collaboratively to achieve segregation of materials, guided by data
input from the AI system. The proposed system has been subjected to an extensive series
of evaluations carried out within a carefully monitored environment. This environment
ensures that all variables are precisely controlled and continuously monitored. The results
of these evaluations indicate that the system exhibits excellent accuracy and is capable of
scaling efficiently to accommodate larger-scale waste sorting operations. Hence, this study
proposes the integration of AI and PLC to construct a scalable and effective prototype that
is designed to improve waste sorting efficiency. The ultimate objective is to contribute
significantly to the advancement of sustainable recycling methodologies.
2. System
2.1. Research Design and Methodology
This section outlines the detailed research design and methodology adopted in the
development of a PLC-controlled intelligent conveyor system enhanced with AI-driven vi-
sion to improve efficiency in waste sorting. The methodologies implemented, especially the
seamless integration of AI-powered vision systems with PLC automation, were systemati-
cally selected to improve precision and optimize the efficiency of the waste sorting process.
The system design employs the YOLOv8 deep learning model, known for its superior
accuracy and real-time processing capabilities, ensuring the proficient identification and
categorization of various recyclable materials under dynamic operational conditions. De-
tailed descriptions of the experimental configurations, including the conveyor framework,
camera configurations, and PLC programming, are discussed in the following sections to
illustrate the practical implementation of the theoretical design within a controlled setting.
The system was validated through an array of tests designed to evaluate its precision,
reliability, and scalability in processing various types of material. To verify the robustness
of the research design, extensive testing and calibration of AI algorithms and mechanical
adjustments were essential for optimized performance. The system components were
precisely calibrated, including the selection of detection thresholds and response timing,
which are crucial for ensuring precise sorting results and rapid response. The direct
correlation between the research design and the system performance metrics presented
in the results section underscores the potential of the design to achieve the designated
objectives while supporting the conclusions of the study.
2.2. Components
The system is composed of several fundamental components, including a camera,
conveyor, actuator, PLC, and motor controller. Each of these elements plays an essential role
in the effective execution of the object detection and sorting process. The camera chosen
for this system implementation is an ONN 1440p desktop webcam, which is equipped
with autofocus capabilities and a ring light. The selection of this camera was based on
its superior resolution quality combined with its cost-effectiveness, making it an ideal
choice to consistently obtain clear images of items as they move along the conveyor belt.
The autofocus capability is crucial for the system’s ability to manage changes in the dimen-
sions of objects without the need for manual intervention. This feature is indispensable
in maintaining a uniform level of detection precision in various scenarios. The camera is
installed at a right angle to the conveyor belt, facilitating the acquisition of distinct images
of the items as they temporarily halt for examination. These images are subsequently sent
to a Personal Computer (PC) through a USB interface, where they undergo analysis by a
part-detection algorithm to recognize the items.
The conveyor system, illustrated in Figure 4, enables the transport of objects within the
system and is driven by a brushless DC motor, also shown in Figure 4, which is controlled
by a motor driver that receives signals from the PLC. The velocity and distance traversed
by the objects on the conveyor are controlled by the duration for which the motor receives
power. This design facilitates exact regulation of the conveyor’s motion, guaranteeing that
the objects halt momentarily in front of the camera for image capture before proceeding
along the conveyor belt. The conveyor system is designed to handle the weight and
dimensions of recycled materials, ensuring smooth operation.
Figure 4. (Left) Conveyor system for object transport. (Right) Oriental Motor BLM460S-GFV2 for
precise motion control.
The sorting mechanism employs a stepper motor as the actuator, which guides a sepa-
rator to direct cans into one container and bottles into another. The precise motion control
of the stepper motor ensures that objects are sorted correctly based on their identified types.
This sorting process is synchronized with the data received from the image processing
system, ensuring real-time adjustments and smooth operation without interruptions.
The system’s control architecture is centered around an Allen-Bradley PLC with
input/output (I/O) modules and an Ethernet port, which is responsible not only for con-
trolling the conveyor and actuator but also for receiving input from the PC. The PLC
functions through ladder logic, enabling real-time control and system monitoring. Addi-
tionally, it supports remote operation via a Human–Machine Interface (HMI), simplifying
the process of monitoring system performance and implementing necessary adjustments.
The flexibility of the PLC allows for future expansions or modifications to the system
without significant hardware changes.
The 60-watt brushless motor driver, shown in Figure 5, simplifies the operation of the
conveyor by directly processing input signals (0–5 V) from the PLC. This design eliminates
the need for extra control signal conditioning, thereby reducing the complexity of the
system and easing the wiring process. The motor controller provides improved scalability
and adaptability, allowing the integration of additional components or modifications to
system parameters while maintaining reliable performance and reducing energy usage.
All of the previously mentioned components are vital in enhancing the system’s
efficiency, ensuring that it operates effectively and reliably. Together, they enable precise,
real-time object identification and classification. Their combined efforts ensure that the
system functions seamlessly and autonomously, enhancing the overall process for optimal
efficiency and productivity.
Figure 5. BMUD60-A2 brushless DC motor driver for precise control and efficient operation of the
conveyor system.
2.3. Architecture
In the previous section, the different elements that constitute the system architecture
were outlined. This architecture incorporates several interconnected elements that work
together to efficiently detect and sort objects automatically. The system is designed to
include a conveyor belt, a camera, a stepper motor, a PLC, an HMI, and a PC. Each
component has a unique function within the system, facilitating the exchange of data and
control signals to ensure efficient real-time processing and operation.
In this prototype, a motor relay controls the conveyor belt, with its direction man-
aged via two input ports. These ports determine whether the motor rotates clockwise or
anticlockwise, based on the received signals. This control is essential for the conveyor
to function properly in moving items through the detection and sorting process. A com-
prehensive description of the system components and their connections is illustrated in
Figure 6.
HMI
Figure 6. Schematic of the automated waste sorting system: conveyor belt with a camera, stepper
motor, PLC, PC, HMI, and bins for object sorting and categorization.
A camera is located at the start of the conveyor belt, where various parts, such as
plastic, metal, or paper, are placed. This camera plays a crucial role in taking pictures of
items as they enter the system. The captured images are transmitted to the PC for immediate
processing and examination. Positioning the camera at the conveyor entry point guarantees
that every item is captured immediately, allowing prompt object recognition. As illustrated
in Figure 7, the PC and the PLC are connected by an Ethernet cable, establishing a reliable
communication network. This Ethernet connection allows the PC to interact with the PLC
using tag addresses to perform data writing and reading operations on the PLC. The PC
processes the camera data, identifies the objects, and sends control commands to the PLC.
These commands instruct the PLC on how to manage the sorting process. The use of
Ethernet provides high-speed communication between the two components, ensuring that
real-time control is maintained throughout the process.
The PLC is integral to the control architecture of the system. It includes I/O ports that
facilitate connections to the motor control system, the start and stop buttons, and the HMI.
The start and stop buttons provide manual control over the conveyor system, allowing
operators to start or stop the process as needed. The motor controller interfaced with the
PLC allows the system to regulate the conveyor’s movement with precision, ensuring
that items are delivered accurately to their specified containers. The HMI serves as a user
interface that allows operators to monitor system status, view process data, and make
real-time adjustments as needed. The integration of HMI with PLC provides operators
with valuable insight into system performance and enables efficient process control.
Once the camera captures the object and the computer algorithm recognizes it, the in-
formation is sent via the Ethernet connection to the PLC. The PLC then directs the conveyor
belt to transport the items to their designated sorting positions. Once the part reaches
the end of the belt, the PLC activates a stepper motor, which is responsible for pushing
the object into the appropriate bin. The system employs color-coded bins for different
materials (e.g., plastic, metal, and paper), allowing for efficient sorting based on the de-
tected object type. The stepper motor provides precise control, ensuring that each object
is accurately directed to its designated bin. The PLC facilitates adaptable control and
system scalability, and the Ethernet link between the PC and the PLC guarantees swift
and stable data transmission, while the HMI improves user interaction with the system,
offering real-time feedback and control capabilities. By integrating various components
and their associated functionalities, this architectural design establishes an efficient and
automated object detection and sorting mechanism, effectively illustrating a viable strategy
for automated waste sorting through the integration of multiple technologies to achieve a
comprehensive and efficient process.
Figure 7. System connection diagram: communication and control links integrating the camera, PC,
PLC, motor, and sensors.
2.4. Operation Workflow
The automated classification system functions through a sequential procedure that
relies on camera input data and is managed by a PLC. Figure 8 illustrates a flowchart
detailing the main stages of this procedure, designed to detect, identify, and categorize
different waste materials in suitable containers.
Figure 8. Flowchart of part classification and sorting processes using camera-based recognition.
The process begins with the system initialization, indicated by the Start block. Once
started, the system moves to the camera position, where it waits to detect the presence of
a part. If no part is detected, the system is retriggered to continue monitoring the arrival
of a new item on the conveyor. When an object is detected, the camera captures an image
and the AI algorithm within the PC recognizes the object by analyzing the image data.
The PLC then assesses the object and categorizes it based on predefined criteria: metal can,
plastic bottle, paper, or reject. Based on the identified object, the system follows one of the
following steps:
1. Metal Can Identification: If the detected object is identified as a metal can, the PLC
sends a command to the stepper motor, which then positions the object correctly and
pushes it off the conveyor into the designated can bin.
2. Plastic Bottle Identification: If the detected object is identified as a plastic bottle,
the PLC instructs the stepper motor to push the object into the bottle bin.
3. Paper Identification: If the part is identified as paper, the PLC sends a command to
the stepper motor, enabling the object to move toward the paper bin located at the
end of the conveyor.
In each scenario, the stepper motor facilitates the precise mechanical relocation of de-
tected items to their designated containers, ensuring accurate categorization. Components
that do not conform to the predefined categories of metal, plastic, or paper are classified
as rejects. Subsequently, the system directs these unidentifiable items to a dedicated reject
container. This procedural framework improves the efficiency of the automation process,
with the PLC managing the logical operations and the stepper motor executing the sorting
tasks based on object classification.
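To make the dispatch logic above concrete, the short Python sketch below mirrors the flowchart in Figure 8; the class labels and action names are illustrative placeholders, not the actual tags used in the deployed program.

```python
# Minimal sketch of the classification-to-action dispatch in Figure 8.
# Class labels and action names are hypothetical placeholders.
def route_part(label: str) -> str:
    """Map a detected class to the sorting action the PLC should trigger."""
    routes = {
        "can": "move_to_can_bin",
        "plastic_bottle": "move_to_bottle_bin",
        "paper": "move_to_paper_bin",
    }
    # Anything outside the three trained categories goes to the reject container.
    return routes.get(label, "move_to_reject")

print(route_part("can"))    # -> move_to_can_bin
print(route_part("glass"))  # -> move_to_reject
```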
3. Deep Learning Model and Programming Framework
This section describes the deep learning model utilized to train the classification
system. It provides a detailed account of both the image classification model and the
logic control system integrated into the PLC, as well as its application in the physical
prototype. The selection of an appropriate platform for the AI-based vision system was
a critical challenge. In this project, the Ultralytics YOLOv8 deep learning neural network
package [26] was used due to its ease of implementation, extensive support packages,
and robust performance in object detection tasks. The choice of YOLOv8 allows for efficient
processing and accurate detection, making it suitable for real-time applications.
The neural network was implemented using the Python programming language,
selected for its modular structure and ease of use. Although Python may not be the most
high-performance language in terms of computational speed, it provides considerable
flexibility and demonstrates sufficient efficiency for tasks involving single-frame image
capture and object detection. This makes it suitable for proof-of-principle applications.
The training dataset consisted of approximately 2000 captured images, balanced across
three classes: cans, plastic bottles, and paper. These images were collected under diverse
environmental conditions, with variations in geometry, orientation, size, and lighting to
reflect the diversity of real-world waste sorting scenarios and ensure robust and balanced
model performance. These images were used to train the YOLOv8 model, allowing accurate
classification of objects within the recycling stream. Figure 9 provides examples of the batch
process used for image identification and classification during model training. A batch
refers to a subset of training images that are processed through the model, where tasks
such as edge detection and bounding-box regression are performed. In Figure 9a, the labels
designated by the human annotator (trainer) are presented, whereas Figure 9b illustrates
the predicted labels of the model for various objects, including examples of paper. This
process enables the model to learn iteratively from smaller data segments, enhancing its
accuracy in object detection and classification.
(a) (b)
Figure 9. Training samples used for object detection. (a) Labeled images annotated by the human
trainer, featuring objects such as plastic bottles and crumpled paper in varied conditions. (b) Predicted
labels generated by the model.
3.1. Code and Integration with PLC
The programming environment used for this project is Python 3.8.1, selected for its
stability and compatibility with the Ultralytics YOLOv8 software package. Additionally,
the Pylogix library was employed to interface with the Allen-Bradley PLC, ensuring efficient
communication between the image processing model and the control system. Detailed
Python codes are available in Appendix A.
In Figure A1, key libraries such as Ultralytics and cv2 are imported to handle video
capture and processing. The first code segment imports the training images, configuring
parameters like 50 epochs, an image dimension of 736 × 736 pixels, and a batch size
of 16. These settings were adjusted in consideration of the hardware limitations and
computational demands of the system. The image capture process is initiated with the
command cap.read(), after which the model performs real-time inference using the function
model(image, stream=True) to detect and classify objects within the captured frames.
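As a minimal sketch of this flow, assuming the Ultralytics YOLOv8 Python API, the fragment below combines the training parameters quoted above with single-frame capture and streaming inference; the dataset configuration file name is a hypothetical placeholder.

```python
# Sketch of the training and inference flow described above (assumed API usage).
# "recyclables.yaml" is a hypothetical dataset configuration file.
from ultralytics import YOLO
import cv2

# Train with the parameters reported in the text: 50 epochs, 736x736 images, batch 16.
model = YOLO("yolov8n.pt")
model.train(data="recyclables.yaml", epochs=50, imgsz=736, batch=16)

# Capture a single frame from the webcam and run streaming inference on it.
cap = cv2.VideoCapture(0)
ret, image = cap.read()
if ret:
    for result in model(image, stream=True):
        for box in result.boxes:
            label = model.names[int(box.cls)]
            print(label, float(box.conf))
cap.release()
```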
Figure A2 displays the second segment of the source code, highlighting its incorpo-
ration into the Pylogix library. This library substantially facilitates the communication
processes with the PLC. Within this code snippet, essential tasks are accomplished, begin-
ning with the specification of the PLC’s Internet Protocol (IP) address and encompassing
critical operations, such as retrieving tag inventories using the getTagList function, as well
as executing commands to write data to tags and read their current values. These integral
capabilities collectively enable seamless interaction with the PLC, facilitating the execution
of control actions derived from processed data. Consequently, the system achieves the
proficiency required for real-time automation and sorting procedures.
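A hedged sketch of this communication layer, using the pylogix calls named above (tag listing, writes, and reads), is shown below; the IP address and tag names are illustrative placeholders rather than the project's actual configuration.

```python
# Sketch of PC-to-PLC communication over Ethernet using pylogix.
# The IP address and tag names are hypothetical placeholders.
from pylogix import PLC

with PLC() as comm:
    comm.IPAddress = "192.168.1.10"

    # Retrieve the controller's tag inventory.
    tags = comm.GetTagList()
    for tag in tags.Value or []:
        print(tag.TagName, tag.DataType)

    # Write a classification result to a tag, then read back a status tag.
    comm.Write("DetectedClass", 2)
    response = comm.Read("ConveyorRunning")
    print(response.TagName, response.Value, response.Status)
```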
3.2. PLC Programming and Control Logic
Programming of the PLC is performed using ladder logic within the RS Logix 5000
v20 software. Several software functions are used, including examining input ports, mov-
ing commands, timers and counters, internal registers, subroutines, and output ports.
The program includes test switches and physical button ports for maintenance and testing
purposes. The system can be reprogrammed to adjust the end positions by modifying the
timer values. Additionally, new parts can be seamlessly integrated by duplicating rungs in
the subroutine without requiring significant changes to the main routine. Bypasses for di-
agnostics and testing are also included, allowing the process to be controlled through HMI
buttons or physical switches. The primary function of the program is structured around a
main routine, illustrated in Figure 10, which performs the following checks and processes:
1. The start button is pressed;
2. The conveyor moves forward for a predefined duration (T1);
3. The camera detects the presence and type of part;
4. The program triggers the appropriate subroutine;
5. The subroutine loads a preset timer according to the detected part;
6. The motor operates once activated, triggered by the T2 timer;
7. The part is pushed by the actuator;
8. All timers are reset, and the program repeats the cycle;
9. If no parts are detected, the program is stopped until the operator resets or restarts it.
The subroutine depicted in Figure 11 plays a crucial role in acquiring the input data
from the Ethernet network. Subsequently, it loads one of three predetermined values
onto a timer. These preset values dictate the distance the conveyor travels before coming
to a halt. To maintain uninterrupted operation as components are introduced into the
system, latching logic is used. This approach simultaneously isolates output connected to
previously sorted components, thus maintaining precise sorting into the designated bins.
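To illustrate, the Python fragment below emulates the subroutine's preset selection using the timer constants that appear in Figure 11 (22,000 for paper, 17,500 for cans and plastic bottles, and a 2000 default); it is a simplified stand-in for the ladder logic, not a literal translation.

```python
# Simplified emulation of the subroutine's timer-preset selection (Figure 11).
# This function is an illustrative stand-in for the ladder logic.
from typing import Optional

PRESETS = {"paper": 22000, "can": 17500, "plastic_bottle": 17500}  # as in Figure 11
DEFAULT = 2000  # loaded when no class output is latched

def load_travel_preset(detected: Optional[str]) -> int:
    """Return the timer preset governing how far the conveyor travels."""
    return PRESETS.get(detected, DEFAULT)

print(load_travel_preset("paper"))  # 22000
print(load_travel_preset(None))     # 2000
```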
Figure 10. Ladder diagram of the main routine illustrating conveyor movement, part detection,
and actuator control within the automated sorting system.
Paper-based items are allowed to pass directly beyond the actuator and proceed into a
designated container specifically designed for paper, located at the end of the conveyor belt
system. On the other hand, plastic and metallic items are programmed to pause directly
underneath the actuator. Upon achieving the correct placement, the actuator is initiated.
The stepper motor responds by rotating clockwise or counterclockwise, depending on
whether a metal can or plastic bottle has been identified, thus guiding the item into the
appropriate collection bin located on the left or right side of the conveyor system. After the
sorting mechanism has completed its operation, all associated timers are set back to their
initial state, and any latched outputs are subsequently unlatched.
As noted above, the built-in test switches, adjustable timer values, and duplicable
subroutine rungs give the program the flexibility required for efficient scalability and
adaptability in the sorting process. The diagnostic and testing bypasses allow the process
to be operated through the main HMI screen or physical buttons, as shown in Figure 12a,
depending on the user's preference.
The configuration settings of the HMI panel, shown in Figure 12b, allow easy control
and adjustment of the system parameters, ensuring smooth operation and maintenance.
The counters integrated into the input rungs track the number of processed parts and
display this information on numeric displays, as configured in the HMI panel settings
shown in Figure 12a.
Figure 11. Ladder diagram of the subroutine handling conveyor timing and sorting actions specific
to identified parts, including paper, cans, and plastic.
(a) (b)
Figure 12. (a) Main HMI interface displaying start/stop controls and material counters for paper,
plastic, and cans. (b) Configuration panel for system settings, including start roll and actuator
timing adjustments.
3.3. System Deployment
The physical system was developed to validate the functionality of the classification
system. It consists of a conveyor belt driven by a brushless motor through an Oriental
Motor BMUD60-A2 driver (Oriental Motor Co., Ltd., Tokyo, Japan), which facilitates the
movement of recycling materials along the conveyor. At the end of
the conveyor, a stepper motor is integrated with a flipper mechanism designed to sort
bottles and cans. As illustrated in Figure 13, the system operates by advancing items
along the conveyor until they are detected by the vision system using a camera. If the
system identifies paper, the item proceeds to the end of the conveyor without intervention.
In contrast, if a can is detected, the flipper mechanism directs it to the left bin, and if a
plastic bottle is identified, it is sorted into the right bin. The flipper mechanism is controlled
by a stepper motor managed by an Arduino Uno, which synchronizes the sorting process
based on input signals received from the PLC.
Figure 13. Experimental setup of the conveyor sorter, illustrating the conveyor belt, sorting mecha-
nism, and bins designated for the recycling of cans, plastic bottles, and paper.
This setup provides a simplified yet effective platform for validating the performance
of the vision-based classification system, ensuring reliable operation without introducing
unnecessary complexity into the physical design. The system is currently configured to
classify three types of waste, i.e., paper, cans, and bottles, while offering scalability to
include additional recycling categories by incorporating more conveyors and actuators as
needed.
4. Performance and Evaluation Results
The prototype was evaluated using three categories of trained objects: plastic bottles,
metal cans, and paper. The control system, which integrates the YOLOv8 deep learning
algorithm for object detection, a PLC for control, and a stepper motor with a conveyor for
actuation, demonstrated consistent and reliable performance. Using the YOLOv8 algorithm,
the system achieved precise and rapid detection and classification of objects with minimal
processing delays, ensuring uninterrupted and efficient sorting operations. Objects were
sorted with high accuracy and minimal false readings, validating the prototype’s ability
to handle real-time waste sorting. The system also maintained its effectiveness during
continuous operation, demonstrating its ability to sustain high-throughput performance.
The results highlight the functionality of the prototype and its scalability for faster
sorting operations and adaptability to additional object categories, such as organic waste.
The integration of the YOLOv8 deep learning algorithm with the PLC was pivotal in
achieving these results. While the YOLOv8 algorithm ensured accurate and efficient object
classification, the PLC seamlessly managed the conveyor, motor, and actuator, enabling
smooth and reliable sorting based on classification results. This real-time coordination
directed each identified object to its designated category (plastic, metal, or paper) without
disrupting conveyor flow. The continuous sorting capability underscores the potential
of the system for industrial-scale implementation.
The system architecture also supports scalability, allowing for the classification of new
material categories by retraining the YOLOv8 algorithm and making minor adjustments to
the PLC programming. This adaptability is critical to address the evolving requirements of
waste management facilities as recycling standards and regulations continue to advance.
The performance of the YOLOv8 deep learning algorithm was rigorously evaluated
using several metrics to assess its classification accuracy and operational robustness. These
metrics included the confusion matrix, which provided insight into prediction errors,
and precision and recall, which quantified the quality of positive class predictions. Further-
more, the Mean Average Precision (mAP) was calculated to evaluate the overall predictive
performance at varying thresholds. Training and validation losses, including box loss,
classification loss, and Distribution Focal Loss (DFL), were systematically analyzed to
optimize the algorithm’s performance. These metrics demonstrated the high precision,
scalability, and effectiveness of the YOLOv8 algorithm in real-time waste sorting scenarios.
The subsequent sections provide a detailed discussion and analysis of these findings.
4.1. Confusion Matrix
Figure 14 presents the confusion matrix used to evaluate the classification performance
of the model. In this matrix, each column corresponds to the true classes, while each row
corresponds to the classes predicted by the model. The scores, ranging from 0 to 1, provide
a quantitative measure of accuracy. The model achieved an accuracy of 88% for metal cans,
75% for paper, and 91% for plastic bottles, demonstrating strong performance, particularly
with plastics. Although the results align with the project objectives, further improvements
are expected by introducing a black background on the conveyor belt to reduce noise and
minimize errors caused by random objects, improving overall classification accuracy.
Predicted \ True   Cans    Paper   Plastic bottle   Background
Cans               0.88    0.00    0.00             0.08
Paper              0.04    0.75    0.09             0.90
Plastic bottle     0.01    0.00    0.91             0.04
Background         0.07    0.25    0.00             0.00
Figure 14. Confusion matrix displaying the classification results for cans, paper, plastic bottles,
and background, with camera-based recognition.
4.2. Training and Validation Box Loss
The training and validation box loss measures the error between the predicted bounding
boxes and the ground-truth bounding boxes (i.e., the human-annotated correct boxes).
During labeling, the user annotates each object by drawing a bounding rectangle with
the corresponding label, while the ML model generates bounding boxes based on its
understanding of the image. These metrics are crucial during the training and validation
phases of ML models, as they quantify how well the predicted bounding boxes align with
the ground-truth bounding boxes for objects in an image, particularly in tasks aimed at
accurately locating and classifying objects.
Figure 15 reflects the overlapping errors between the predicted and actual bounding
boxes. The box loss demonstrates an ideal linear decrease, indicating improved accuracy in
identifying bounded boxes on objects. A box loss of 0.6 is generally considered acceptable
for multiclass detection models, highlighting the model’s effective performance in locating
objects within images.
[Figure 15 panels, each plotted over 50 training epochs: Training Box Loss, Validation Box Loss, Training Class Loss, Validation Class Loss, Training Distribution Focal Loss (DFL), Validation Distribution Focal Loss (DFL), Precision, Recall, Mean Average Precision (mAP50), and Mean Average Precision (mAP50-95).]
Figure 15. Training and validation metrics over 50 epochs, illustrating the progression of box loss,
classification loss, DFL, precision, recall, and mAP metrics for model evaluation and optimization.
4.3. Training and Validation Class Loss
The training and validation class loss measures the accuracy of the model in assigning
objects to their correct categories during the training and validation phases. In object detec-
tion and classification tasks like YOLOv8, class loss is a crucial metric that quantifies the
error between predicted class probabilities and ground-truth class labels within bounding
boxes, evaluating the model’s ability to correctly classify detected objects.
Figure 15 shows that the class loss follows a logarithmic decline, stabilizing around 0.5.
The relatively high class loss is probably due to background confusion in the paper class.
However, this loss is expected to decrease with further training and additional epochs,
as the YOLOv8 model continues to refine its classification accuracy.
4.4. Training and Validation DFL Loss
The DFL is a specialized loss function used in object detection models to improve the
regression of the bounding box by refining the distribution of predicted values. It measures
the error associated with key point detection or regression during training and validation,
with a focus on improving the model’s ability to predict accurate bounding boxes.
DFL is particularly effective for difficult-to-classify examples and incorporates the Complete
Intersection over Union (CIoU) metric to assess the overlap and alignment between
predicted and ground-truth bounding boxes. A DFL loss of 1.2, as shown in Figure 15,
is considered acceptable, especially for complex object shapes like flat, crumpled, or
book-bound paper. This loss function ensures robust and precise localization in
challenging scenarios.
4.5. Precision
Precision quantifies the proportion of true positive predictions (i.e., correct detections)
relative to the total positive predictions (including both true positives and false positives).
Higher precision indicates that the algorithm is yielding more relevant and accurate results.
As illustrated in Figure 15, the precision metric reaches a maximum value of approxi-
mately 90%, indicating a high level of precision in detecting objects within the dataset.
Increasing the number of training epochs is unlikely to improve this metric further and
could potentially lead to overfitting, where the model becomes overly specialized to the
training data, limiting its ability to generalize to new unseen images.
4.6. Recall
Recall is a performance metric that evaluates how effectively a model identifies true
positives (i.e., correct detections) out of all actual ground-truth objects in a dataset. It
measures the model’s ability to detect all relevant objects and avoid false negatives (i.e.,
missed detections), quantifying the proportion of true positive predictions relative to all
ground-truth objects.
As shown in Figure 15, the recall metric reaches a maximum value of approximately
80%. However, additional training epochs are unlikely to improve this score and may lead
to overtraining, impairing the model’s ability to generalize to unseen data. This underscores
the need to balance training to optimize recall while preserving robust performance in
new datasets.
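Both metrics reduce to simple counts of true positives (TP), false positives (FP), and false negatives (FN). The sketch below uses invented counts chosen to land near the reported 90% precision and 80% recall; the counts are not taken from the experiments.

def precision_recall(tp, fp, fn):
    # Precision: fraction of predicted detections that are correct.
    # Recall: fraction of ground-truth objects that were detected.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# 90 correct detections, 10 false alarms, 23 missed objects:
print(precision_recall(90, 10, 23))  # (0.90, ~0.80)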
4.7. Mean Average Precision (mAP50 and mAP50-95)
The mAP score is a crucial metric to evaluate the overall performance of an object detec-
tion model, as it measures how well the model detects objects by calculating the Intersection
over Union (IoU), which assesses the overlap between the predicted and ground-truth
bounding boxes. The mAP50 metric evaluates a model’s ability to correctly detect objects
at a more lenient overlap criterion (0.5 IoU), meaning it prioritizes detecting objects but
does not necessarily require high localization precision. Figure 15 illustrates the mAP50
score, using a 50% overlap threshold, showing that the model reached approximately 86%.
This score is considered adequate for object identification even when boundaries are not
precisely matched, signifying the model’s robust object identification capability despite
imperfect boundary alignment.
The mAP50-95 metric is widely used for evaluating object detection models, providing
a more comprehensive evaluation by averaging the AP over IoU thresholds ranging
from 0.5 to 0.95 in increments of 0.05. This method examines
the model’s ability to precisely detect objects with different degrees of overlap, thus
allowing for a more rigorous evaluation of detection and localization accuracy. By assessing
multiple IoU thresholds, the metric gives a holistic view of the robustness of the model
under various detection conditions. In this case, the model achieved an mAP50-95 score
of approximately 70%, indicating strong detection performance across the evaluated IoU
thresholds. The equation for calculating the mAP is as follows:
mAP = \frac{1}{k} \sum_{i=1}^{k} AP_i    (1)
where:
• mAP: Mean Average Precision, a single scalar value that summarizes the model’s
detection precision across classes and IoU thresholds.
• k: The total number of IoU thresholds (or classes) over which the mAP is averaged.
For mAP50-95, k represents multiple thresholds from 0.5 to 0.95.
• AP_i: Average Precision for the i-th threshold, which is calculated as the area under the
Precision-Recall (PR) curve for that specific IoU threshold.
The equation computes mAP by averaging precision scores across specified IoU
thresholds, providing a comprehensive measure of object detection performance and
reflecting both high-confidence detections and adaptability to varying object overlaps.
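As a worked example of Equation (1), the sketch below averages hypothetical per-threshold AP values, chosen for illustration to be consistent with the reported mAP50 of 86% and mAP50-95 of 70%; they are not measured results.

def mean_average_precision(ap_per_threshold):
    # Equation (1): average the per-threshold AP values.
    return sum(ap_per_threshold) / len(ap_per_threshold)

# Hypothetical AP at IoU = 0.50, 0.55, ..., 0.95 (ten thresholds),
# degrading as the overlap requirement tightens:
aps = [0.86, 0.84, 0.82, 0.79, 0.75, 0.71, 0.66, 0.60, 0.52, 0.45]
print(mean_average_precision(aps))  # = 0.70 -> mAP50-95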
5. Discussion
The performance of the proposed PLC-Controlled Intelligent Conveyor System with
AI-enhanced vision, while promising, necessitates a comprehensive evaluation of several
critical aspects. The discussion highlights the distinct contributions and innovations of
our system, along with its practical implications and scalability potential. This includes its
ability to significantly improve the effectiveness of waste management practices, aligned
with broader environmental sustainability goals. Following this, reflections on the system’s
performance and optimization strategies are examined, where the robustness and adapt-
ability of the system are explored in depth, with a special emphasis on its ability to adapt
to dynamic operational conditions that extend beyond controlled environments.
Subsequently, a comparative analysis with existing waste sorting technologies is
presented. This section will offer a concise comparison, underscoring how the system differ-
entiates itself in terms of operational efficiency and technological advancement compared
to current market solutions. Finally, future extensions are discussed, including potential
improvements and outlining ongoing efforts to refine and enhance the functionalities of the
system. These efforts are driven by the objectives of technological innovation and environ-
mental sustainability, to keep the system at the forefront of waste management technology.
5.1. Contribution and Research Implications
The incorporation of the YOLOv8 deep learning model with a PLC into the sorting
system marks a notable enhancement in automated waste sorting technology. The results
confirm the high precision of the system, achieving classification accuracies of 88% for metal
cans and up to 91% for plastic bottles. This shows a crucial improvement in the efficiency
of recycling operations. In addition, the system has been proven capable of maintaining
this high level of accuracy even in the presence of continuous chaotic waste streams and
improperly arranged objects, showcasing its suitability for large-scale industrial waste
management applications.
The implementation of the system has provided valuable insight into the practical
challenges of automated waste sorting. One significant challenge is handling object
overlaps, which can lead to inaccuracies in classification. Although the system demonstrates
proficiency in sorting and classifying objects within disordered waste streams, advancing
the development of algorithms that can accurately differentiate and categorize overlapping
objects is identified as a critical focus for subsequent iterations, which is essential to address
complex classification challenges. Such refinements are crucial to improve the robustness
and efficiency of the sorting system. Future enhancements will focus on increasing the
resilience and reliability of the system in managing diverse operational conditions, ensuring
consistent performance in a wider range of waste management scenarios.
5.2. Model Performance and Optimization
The model achieved notable performance metrics, including a box loss of 0.6, a classification
loss of 0.5, a DFL loss of 1.2, an mAP50 of 86%, and an mAP50-95 of 70%. During a
live test with a camera, the model demonstrated accurate object classification in various ob-
ject orientations. However, additional training could risk overfitting, reducing the ability of
the model to generalize to new data. Therefore, hyperparameter tuning becomes essential
to improve performance without the need for more epochs.
Hyperparameters, including learning rate, batch size, number of epochs, dropout rate,
and weight initialization, significantly influence the model’s performance and optimization
process. For this study, the hyperparameters were set to the default values recommended
by Ultralytics, as altering them would require extensive retraining and comparison, an in-
herently time-consuming task. The learning rate, batch size, and weight initialization
all contribute uniquely to the optimization of the model. Adjusting the learning rate is
critical to preventing premature convergence or the risk of getting trapped in local minima.
The batch size affects the stability of gradient updates and thus the training dynamics,
while weight initialization determines the initial setup of model parameters, influencing
the efficiency of the learning process. The ability to appropriately fine-tune these settings is
crucial for achieving optimal results in real-world applications.
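For reference, the training invocation shown in Appendix A can expose these hyperparameters explicitly through the Ultralytics API; lr0 (the initial learning rate) is written out at its documented default of 0.01, since the study retained the recommended defaults.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Same training call as Appendix A, with the initial learning rate
# (lr0) made explicit at its Ultralytics default; the batch size and
# epoch count are the values used in this study.
model.train(data="config.yaml", epochs=50, imgsz=736, batch=36, lr0=0.01)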
5.3. Comparative Analysis with Existing Systems
A comparative analysis is conducted to contextualize the performance of the devel-
oped PLC-Controlled Intelligent Conveyor System with AI-enhanced vision against similar
systems reported in the literature. This comparison is not intended to provide a thorough
review but rather to underscore the contributions of the current work within the field of au-
tomated waste sorting technologies. Key performance metrics, such as accuracy, precision,
and recall, are commonly employed to evaluate the effectiveness of these systems.
In a very recent study, Sayem et al. [36] reported a waste classification system using
deep learning that achieved an accuracy of 83.11% across 28 distinct recyclable categories
and an mAP50 of 63%, reflecting its effectiveness in object detection tasks. In another study,
Al-Mashhadani et al. [37] evaluated several deep learning models for the classification of
waste materials, including ResNet50, GoogleNet, InceptionV3, and Xception. Among these,
the ResNet50 model exhibited an accuracy of 95%, a precision of 95.4%, a recall of 95%,
and an F1 score of 94.8%, which shows its robustness in the classification of waste materials.
Feng et al. [11] developed an intelligent waste bin with automatic sorting, introducing
GECM-EfficientNet for real-time waste classification. The model achieved 94.54% accuracy
on a self-built household waste dataset and 94.23% on the TrashNet dataset.
In comparison, the system described in this study demonstrated a classification accu-
racy of up to 91% for plastic bottles, a precision of 90%, a recall of 80%, and an mAP50 of
approximately 86%. These results highlight the robust object identification
capabilities of the developed system even when the object boundaries are imperfectly
aligned. In addition, the integration of an advanced deep learning algorithm in the sys-
tem with a PLC facilitates faster processing times and excellent accuracy, highlighting
its potential contributions to the advancement of automated waste sorting technologies.
This comparison underscores the advances presented in this work, situating it within the
broader context of existing automated waste sorting systems.
5.4. Future Extensions
The modular design of the system lays a robust foundation for scalability and versatil-
ity, facilitating seamless expansion to accommodate larger and more complex operational
demands. This design supports straightforward upgrades to the AI deep learning model
and adjustments to the PLC programming, enabling the system to meet diverse waste
management requirements. Beyond MSW, the flexibility of the system extends to the
classification and sorting of a wide range of recyclables, including organic waste, electronic
components, and specialized industrial or chemical materials. The modular design further
allows the integration of additional sorting stations customized to bio-based materials
or other specific waste streams, paving the way for a comprehensive and versatile waste
management solution capable of addressing the evolving demands of recycling industries.
To improve the performance efficiency of the system, several optimization strate-
gies are being explored. These include the incorporation of advanced machine and deep
learning algorithms to improve object recognition and sorting accuracy, particularly un-
der challenging conditions such as varying lighting or overlapping objects. Integrating
additional sensors, such as infrared or ultrasonic technologies, could provide additional
data to refine the identification of materials, improving the precision and reliability of the
sorting process.
Future extensions will emphasize increasing the system’s ability to manage over-
lapping objects, a critical challenge in automated sorting. Advanced image processing
algorithms are being investigated to improve segmentation and edge detection, facili-
tating more accurate differentiation between stacked or adjacent items. Depth-sensing
technologies, such as stereo cameras or LiDAR, are also being considered to provide spa-
tial information that can further improve the system’s capacity to handle complex waste
configurations. These enhancements aim to refine the sorting process, reduce error rates,
and improve operational efficiency in demanding and dynamic environments.
Advancements in the model training process are also a key priority, with efforts
focused on minimizing classification errors and improving generalization across diverse
operational scenarios. These improvements involve expanding the training dataset to
include a wider variety of waste types and conditions, as well as employing more rigorous
validation techniques to ensure robustness and reliability in real-world applications.
Emphasizing its scalability, adaptability, and ongoing optimization, the system is
presented as a proactive approach to addressing the issues associated with modern waste
management. These planned extensions and improvements will ensure its relevance
and effectiveness in addressing a wider spectrum of waste types and recycling needs while
simultaneously contributing to the advancement of intelligent sorting technologies.
6. Conclusions
This paper outlines the development and successful testing of an AI-driven recycling
sorter, marking a significant advancement in automated waste management, by integrating
a YOLOv8-based deep learning model with a PLC-controlled mechanical system. The de-
veloped prototype demonstrated effective classification of recyclable materials, including
plastics, metals, and paper. It achieved a precision rate of 90% and a recall rate of 80%, with a
mean Average Precision at 50% threshold (mAP50) of 86%, ensuring rapid and accurate
material segregation. This seamless integration ensures reliable real-time functionality,
suitable for broad waste management applications. In continuous use, it maintained high
precision and reliability, showing low training losses, a box loss of 0.6, and a classification
loss of 0.5, effectively handling object overlaps without overfitting.
Compared to existing solutions, such as those reviewed in the literature, our system
shows excellent performance. For example, compared to the ResNet50 model evaluated by
Al-Mashhadani et al. [37], which achieved an accuracy of 95% and an F1 score of 94.8%, our
system provides comparable accuracy but with enhanced real-time processing capabilities
essential for dynamic waste sorting environments. This represents a clear improvement in
operational efficiency and adaptability, with our system achieving an mAP50 approximately
23% higher than similar AI models discussed in a recent study by Sayem et al. [36].
The integration of advanced AI capabilities not only mitigates the limitations of tra-
ditional sorting methods but also enhances the precision and throughput of the sorting.
Unlike traditional methods that struggle with mixed waste and require extensive human
intervention, our AI-enhanced system reduces labor costs, minimizes human error, and im-
proves overall operational efficiency. This reduction in dependence on manual labor and the
improvement of material quality are crucial as global waste volumes continue to challenge
the urban and industrial sectors.
Looking ahead, the system’s scalability and adaptability will enable compliance with
stringent environmental and regulatory requirements, adjusting efficiently to diverse waste
categories and increasingly intricate waste management contexts. This AI-driven platform
plays an instrumental role in the promotion of sustainable waste management, minimizing
environmental impacts, and serving the principle of the circular economy. It stands out
not only for its technical capabilities but also for its significant contributions to resource
conservation and environmental preservation.
In conclusion, this AI-powered sorting system not only showcases the technical ca-
pabilities and significant impacts of AI in waste sorting but also sets new standards for
sustainable waste management solutions aligned with evolving regulatory and environ-
mental standards.
Author Contributions: Conceptualization, V.R., M.S., A.N., C.K. and N.R.; methodology, N.A. and
N.R.; software, V.R., M.S., A.N. and C.K.; validation, N.A., M.R., H.E. and N.R.; formal analysis,
N.A., V.R., M.S., A.N., C.K. and N.R.; investigation, V.R., M.S., A.N. and C.K.; resources, N.R.; data
curation, V.R., M.S., A.N., C.K. and N.R.; writing—original draft preparation, N.A., V.R., M.S., A.N.,
C.K. and N.R.; writing—review and editing, N.A., M.R., H.E. and N.R.; visualization, N.A., V.R.,
M.S., A.N., C.K., M.R., H.E. and N.R.; supervision, N.A. and N.R.; project administration, N.R.;
funding acquisition, N.A. and N.R. All authors have read and agreed to the published version of
the manuscript.
Funding: This work was supported by the mechatronics laboratories at the College of Computing,
Department of Applied Computing, Michigan Technological University.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data are available upon request from the corresponding authors.
Acknowledgments: This research was conducted in the Advanced Programmable Logic Controllers
Laboratory within the Department of Applied Computing at the College of Computing, Michigan
Technological University. Previous published projects in the same educational settings include
gripper control [38,39], robot arm control and interfacing [40–42], and supervisory control and quality
assurance [43,44].
Conflicts of Interest: The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
AI    Artificial Intelligence
ANN   Artificial Neural Network
CIoU  Complete Intersection over Union
CNN   Convolutional Neural Network
DFL   Distribution Focal Loss
GDP   Gross Domestic Product
HMI   Human–Machine Interface
IoU   Intersection over Union
IP    Internet Protocol
mAP   Mean Average Precision
ML    Machine Learning
MSW   Municipal Solid Waste
PC    Personal Computer
PLC   Programmable Logic Controller
SDG   Sustainable Development Goals
YOLO  You Only Look Once
Appendix A
from ultralytics import YOLO
import cv2
import cvzone
import math
import time

# # Training code.
# if __name__ == '__main__':
#     model = YOLO('yolov8n.pt')
#     # model = YOLO('COCO.yaml')
#     model.train(data="config.yaml", epochs=50, imgsz=736, batch=36)
#     # model.train(data="config.yaml", epochs=50, imgsz=1024, batch=16)

cap = cv2.VideoCapture(0)  # For webcam
cap.set(3, 1280)           # Frame width
cap.set(4, 720)            # Frame height
# cap = cv2.VideoCapture("../Videos/motorbikes.mp4")  # For video

model = YOLO("C:/Users/906Au/PycharmProjects/RecycleProg/best.pt")
classNames = ["Cans", "paper - v1 2023-02-04 11-49pm", "plastic bottle"]

prev_frame_time = 0
new_frame_time = 0

while True:
    new_frame_time = time.time()
    success, img = cap.read()
    results = model(img, stream=True)
Figure A1. Code implementation for importing essential libraries and loading training
images, including Ultralytics for model integration and cv2 for video capture.
from pylogix import PLC

with PLC() as comm:
    comm.IPAddress = '192.0.0.206'
    # ret = comm.Read('paper_out')
    # print(ret.TagName, ret.Value, ret.Status)
    while True:
        # List every tag available on the controller.
        tags = comm.GetTagList()
        for t in tags.Value:
            print("Tag: ", t.TagName, t.DataType)
        # Write the sorting command, then read back a test tag.
        ret = comm.Write("Program:MainProgram.paper_out", 2)
        # print(ret.TagName, ret.Value, ret.Status)
        ret = comm.Read("Program:MainProgram.Test")
        print(ret.TagName, ret.Value, ret.Status)
Figure A2. Code showcasing Pylogix integration for PLC communication, including IP
setup and tag operations (retrieve, write, read).
References
1. UN Environment Programme & ISWA. Global Waste Management Outlook 2024—Beyond an Age of Waste: Turning Rubbish
into a Resource. Technical Report, United Nations Environment Programme. 2024. Available online: https://wedocs.unep.org/
20.500.11822/44939 (accessed on 9 October 2024).
2. Singh, M. Solid waste management in urban India: imperatives for improvement. J. Contemp. Issues Bus. Gov. 2019, 25, 87–92.
3. MacArthur, E. Towards the circular economy. J. Ind. Ecol. 2013, 2, 23–44.
4. Kirchherr, J.; Reike, D.; Hekkert, M. Conceptualizing the circular economy: An analysis of 114 definitions. Resour. Conserv. Recycl.
2017, 127, 221–232. [CrossRef]
5. Material Economics. The Circular Economy—A Powerful Force for Climate Mitigation; Material Economics Sverige AB: Stockholm,
Sweden, 2018.
6. Bozma, H.I.; Yalçın, H. Visual processing and classification of items on a moving conveyor: A selective perception approach.
Robot.-Comput.-Integr. Manuf. 2002, 18, 125–133. [CrossRef]
7. Kazemi, S.; Kharrati, H. Visual Processing and Classification of Items on Moving Conveyor with Pick and Place Robot using PLC.
Intell. Ind. Syst. 2017, 3, 15–21. [CrossRef]
8. Wilts, H.; Garcia, B.R.; Garlito, R.G.; Gómez, L.S.; Prieto, E.G. Artificial intelligence in the sorting of municipal waste as an enabler
of the circular economy. Resources 2021, 10, 28. [CrossRef]
9. Lubongo, C.; Bin Daej, M.A.; Alexandridis, P. Recent developments in technology for sorting plastic for recycling: The emergence
of artificial intelligence and the rise of the robots. Recycling 2024, 9, 59. [CrossRef]
10. Cheng, T.; Kojima, D.; Hu, H.; Onoda, H.; Pandyaswargo, A.H. Optimizing Waste Sorting for Sustainability: An AI-Powered
Robotic Solution for Beverage Container Recycling. Sustainability 2024, 16, 10155. [CrossRef]
11. Feng, Z.; Yang, J.; Chen, L.; Chen, Z.; Li, L. An intelligent waste-sorting and recycling device based on improved EfficientNet. Int.
J. Environ. Res. Public Health 2022, 19, 15987. [CrossRef] [PubMed]
12. Nwokediegwu, Z.Q.S.; Ugwuanyi, E.D.; Dada, M.A.; Majemite, M.T.; Obaigbena, A. AI-driven waste management systems: A
comparative review of innovations in the USA and Africa. Eng. Sci. Technol. J. 2024, 5, 507–516. [CrossRef]
13. Strollo, E.; Sansonetti, G.; Mayer, M.C.; Limongelli, C.; Micarelli, A. An AI-Based approach to automatic waste sorting. In
Proceedings of the HCI International 2020-Posters: 22nd International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July
2020; Proceedings, Part I 22; Springer International Publishing: Cham, Switzerland, 2020; pp. 662–669.
14. Jacobsen, R.M.; Johansen, P.S.; Bysted, L.B.L.; Skov, M.B. Waste wizard: Exploring waste sorting using AI in public spaces. In
Proceedings of the 11th Nordic Conference on Human–Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI
’20), Tallinn, Estonia, 2020; pp. 1–11.
15. Mohammed, M.A.; Abdulhasan, M.J.; Kumar, N.M.; Abdulkareem, K.H.; Mostafa, S.A.; Maashi, M.S.; Khalid, L.S.; Abdulaali,
H.S.; Chopra, S.S. Automated waste-sorting and recycling classification using artificial neural network and features fusion: A
digital-enabled circular economy vision for smart cities. Multimed. Tools Appl. 2023, 82, 39617–39632. [CrossRef] [PubMed]
16. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans.
Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [CrossRef]
17. Shennib, F.; Schmitt, K. Data-driven technologies and artificial intelligence in circular economy and waste management systems:
a review. In Proceedings of the 2021 IEEE International Symposium on Technology and Society (ISTAS), Waterloo, ON, Canada,
28–31 October 2021; pp. 1–5.
18. Melinte, D.O.; Travediu, A.M.; Dumitriu, D.N. Deep convolutional neural networks object detector for real-time waste identifica-
tion. Appl. Sci. 2020, 10, 7301. [CrossRef]
19. Wu, N.; Wang, G.; Jia, D. A Hybrid Model for Household Waste Sorting (HWS) Based on an Ensemble of Convolutional Neural
Networks. Sustainability 2024, 16, 6500. [CrossRef]
20. Chauhan, R.; Shighra, S.; Madkhali, H.; Nguyen, L.; Prasad, M. Efficient Future Waste Management: A Learning-Based Approach
with Deep Neural Networks for Smart System (LADS). Appl. Sci. 2023, 13, 4140. [CrossRef]
21. Abuhejleh, A.A.; Alafeshat, M.Z.; Almtireen, N.; Elmoaqet, H.; Ryalat, M.; AlAjlouni, M.M. Recyclable Waste Categorization with
Transfer Learning. In Proceedings of the 2024 22nd International Conference on Research and Education in Mechatronics (REM),
Amman, Jordan, 24–26 September 2024; pp. 343–348.
22. Huang, K.; Lei, H.; Jiao, Z.; Zhong, Z. Recycling Waste Classification Using Vision Transformer on Portable Device. Sustainability
2021, 13, 11572. [CrossRef]
23. Bhattacharya, S.; Kumar, A.; Krishav, K.; Panda, S.; Vidhyapathi, C.; Sundar, S.; Karthikeyan, B. Self-Adaptive Waste Management
System: Utilizing Convolutional Neural Networks for Real-Time Classification. Eng. Proc. 2024, 62, 5. [CrossRef]
24. Abdu, H.; Noor, M.H.M. A survey on waste detection and classification using deep learning. IEEE Access 2022, 10, 128151–128165.
[CrossRef]
25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
26. Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8. Ultralytics Documentation 2023. Available online: https://github.com/
ultralytics/ultralytics (accessed on 5 October 2024).
27. Choi, J.; Lim, B.; Yoo, Y. Advancing plastic waste classification and recycling efficiency: Integrating image sensors and deep
learning algorithms. Appl. Sci. 2023, 13, 10224. [CrossRef]
28. Wen, S.; Yuan, Y.; Chen, J. A vision detection scheme based on deep learning in a waste plastics sorting system. Appl. Sci. 2023,
13, 4634. [CrossRef]
29. Nandre, C.; Yazbec, E.; Urunkar, P.; Motey, S.; Hazaveh, P.; Rawashdeh, N.A. Robot Vision-based Waste Recycling Sorting
with PLC as Centralized Controller. In Proceedings of the 2023 15th International Conference on Computer and Automation
Engineering (ICCAE), Sydney, Australia, 3–5 March 2023; pp. 381–384.
30. Kokoulin, A.N.; Uzhakov, A.A.; Tur, A.I. The Automated Sorting Methods Modernization of Municipal Solid Waste Processing
System. In Proceedings of the 2020 International Russian Automation Conference (RusAutoCon), Sochi, Russia, 6–12 September
2020; pp. 1074–1078. [CrossRef]
31. Zulfiqar, R.; Mehdi, B.; Iftikhar, R.; Khan, T.; Zia, R.; Saud, N. PLC Based Automated Object Sorting System. In Proceedings of the
2019 4th International Electrical Engineering Conference (IEEC 2019), Singapore, 25–28 November 2019.
32. Koskinopoulou, M.; Raptopoulos, F.; Papadopoulos, G.; Mavrakis, N.; Maniadakis, M. Robotic waste sorting technology: Toward
a vision-based categorization system for the industrial robotic separation of recyclable waste. IEEE Robot. Autom. Mag. 2021,
28, 50–60. [CrossRef]
33. Wu, Q.; Wang, N.; Fang, H.; He, D. A novel object detection method to facilitate the recycling of waste small electrical and
electronic equipment. J. Mater. Cycles Waste Manag. 2023, 25, 2861–2869. [CrossRef]
34. Vinodha, D.; Sangeetha, J.; Sherin, B.C.; Renukadevi, M. Smart garbage system with garbage separation using object detection.
Int. J. Res. Eng. Sci. Manag. 2020, 3, 779–782.
35. Wahyutama, A.B.; Hwang, M. YOLO-based object detection for separate collection of recyclables and capacity monitoring of
trash bins. Electronics 2022, 11, 1323. [CrossRef]
36. Sayem, F.R.; Islam, M.S.B.; Naznine, M.; Nashbat, M.; Hasan-Zia, M.; Kunju, A.K.A.; Khandakar, A.; Ashraf, A.; Majid, M.E.;
Kashem, S.B.A.; et al. Enhancing waste sorting and recycling efficiency: robust deep learning-based approach for classification
and detection. Neural Comput. Appl. 2024, 1–17. [CrossRef]
37. Al-Mashhadani, I.B. Waste material classification using performance evaluation of deep learning models. J. Intell. Syst. 2023,
32, 20230064. [CrossRef]
38. Kocher, E.; Ochieze, C.G.; Oumar, A.; Winter, T.; Sergeyev, A.; Gauthier, M.; Rawashdeh, N. A Smart Parallel Gripper Industrial
Automation System for Measurement of Gripped Workpiece Thickness. In Proceedings of the 2022 Conference for Industry and
Education Collaboration (CIEC), Tempe, AZ, USA, 9–11 February 2022.
39. Valaboju, C.; Amuda, M.; NCh, S.C.; Reddy, S.; Hazaveh, P.K.; Rawashdeh, N.A. A Supervisory Control System for a Mechatronics
Assembly Station. In Proceedings of the 15th International Conference on Computer and Automation Engineering (ICCAE 2023),
Sydney, Australia, 3–5 March 2023; pp. 503–507.
40. Liu, Z.; Johnston, C.; Leino, A.; Winter, T.; Sergeyev, A.; Gauthier, M.; Rawashdeh, N. An Industrial Pneumatic and Servo
Four-axis Robotic Gripper System: Description and Unitronics Ladder Logic Programming. In Proceedings of the 2022 American
Society for Engineering Education (ASEE) Conference for Industry and Education Collaboration (CIEC), Tempe, AZ, USA, 4–11
February 2022; pp. 9–11.
41. Reyes, A.; Reinhardt, S.; Wise, T.; Rawashdeh, N.; Paheding, S. Gesture Controlled Collaborative Robot Arm and Lab Kit. In
Proceedings of the 2022 American Society for Engineering Education (ASEE) Conference for Industry and Education Collaboration
(CIEC), Tempe, AZ, USA, 4–11 February 2022.
42. Piechocki, B.; Spitzner, C.; Karanam, N.; Winter, T.; Sergeyev, A.; Gauthier, M.; Rawashdeh, N. Operation of a Controllable
Force-sensing Industrial Pneumatic Parallel Gripper System. In Proceedings of the 2022 American Society for Engineering
Education (ASEE) Conference for Industry and Education Collaboration (CIEC), Tempe, AZ, USA, 4–11 February 2022; pp. 9–11.
43. Salameen, L.; Estatiah, A.; Darbisi, S.; Tutunji, T.A.; Rawashdeh, N.A. Interfacing Computing Platforms for Dynamic Control and
Identification of an Industrial KUKA Robot Arm. In Proceedings of the 2020 21st International Conference on Research and
Education in Mechatronics (REM), Cracow, Poland, 9–11 December 2020; pp. 1–5. [CrossRef]
44. Alghamdi, B.; Lee, D.; Schaeffer, P.; Stuart, J. An Integrated Robotic System: 2D-Vision Based Inspection Robot with Automated
PLC Conveyor System. In Proceedings of the 4th International Conference of Control, Dynamic Systems, and Robotics (CDSR’17),
Toronto, ON, Canada, 21–23 August 2017; pp. 21–23.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
More Related Content

PDF
IRJET- Sustainable Approach for Development of an Ecocity
PPTX
ultratech esg PRESENATION ESG presentation.pptx
PDF
RECYCLING SIGNIFICANCE OF WASTE MANAGEMENT
PDF
Urbanization: Smart Urban Transit System using Artificial Intelligence
PDF
Environment Sustainability in the Age of Digital Revolution: A Review of the ...
PDF
Improving the Effectiveness of Wastewater Treatment Systems to Enhance Water ...
PPTX
Cities and Climate Change - Cities and climate change mitigation
PDF
Advancing NAPs after Paris: ICT sector contribution
IRJET- Sustainable Approach for Development of an Ecocity
ultratech esg PRESENATION ESG presentation.pptx
RECYCLING SIGNIFICANCE OF WASTE MANAGEMENT
Urbanization: Smart Urban Transit System using Artificial Intelligence
Environment Sustainability in the Age of Digital Revolution: A Review of the ...
Improving the Effectiveness of Wastewater Treatment Systems to Enhance Water ...
Cities and Climate Change - Cities and climate change mitigation
Advancing NAPs after Paris: ICT sector contribution

Similar to PLC-Controlled Intelligent Conveyor System with AI-Enhanced Vision for Efficient Waste Sorting (20)

PDF
Advancing NAPs after Paris: ICT sector contribution
PDF
Implementing Integrated Solid Waste Management: A Case Study of Domestic Wast...
PPT
Business and Sustainable Development - The Green Race is On
PDF
THERMAL PROPERTIES OF INDIAN MUNICIPAL SOLID WASTE OVER THE PAST, PRESENT AND...
PDF
A Review On Smart Garbage Bin Monitoring System
PDF
IRJET- Sustainable Solid Waste Management: A Decentralized Waste Management A...
PDF
IRJET- Automation of Smart Waste Management using IoT
PPT
Lcs 2050 presented for scg
PPTX
2018 GGSD Forum - Opening Session - Keynote Presentation
PDF
Low Carbon China - Innovation Beyond Efficiency
PDF
Need of sustainable development and related issues pertaining to process indu...
PPSX
Eco City Development towards Developing Low Carbon Society
PDF
IRJET - Municipal Waste Handling using IoT
PDF
Atlantic Waste Recycling Innovations in Sustainable Waste Management
PPTX
Green economy
PDF
greeneconomy-150306180005-conversion-gate01.pdf
PDF
Financial analysis of electricity generation from municipal solid waste: a ca...
PDF
Ericsson Mobility Report, November 2015 - ICT and the low carbon economy
PDF
Collection of-organic-and-inorganic-garbage-in-smart-city-in-pakistan-for-ren...
PDF
Information and Communications Technology to Manage Climate Risks and Emissions
Advancing NAPs after Paris: ICT sector contribution
Implementing Integrated Solid Waste Management: A Case Study of Domestic Wast...
Business and Sustainable Development - The Green Race is On
THERMAL PROPERTIES OF INDIAN MUNICIPAL SOLID WASTE OVER THE PAST, PRESENT AND...
A Review On Smart Garbage Bin Monitoring System
IRJET- Sustainable Solid Waste Management: A Decentralized Waste Management A...
IRJET- Automation of Smart Waste Management using IoT
Lcs 2050 presented for scg
2018 GGSD Forum - Opening Session - Keynote Presentation
Low Carbon China - Innovation Beyond Efficiency
Need of sustainable development and related issues pertaining to process indu...
Eco City Development towards Developing Low Carbon Society
IRJET - Municipal Waste Handling using IoT
Atlantic Waste Recycling Innovations in Sustainable Waste Management
Green economy
greeneconomy-150306180005-conversion-gate01.pdf
Financial analysis of electricity generation from municipal solid waste: a ca...
Ericsson Mobility Report, November 2015 - ICT and the low carbon economy
Collection of-organic-and-inorganic-garbage-in-smart-city-in-pakistan-for-ren...
Information and Communications Technology to Manage Climate Risks and Emissions
Ad

Recently uploaded (20)

PPTX
Current and future trends in Computer Vision.pptx
PDF
737-MAX_SRG.pdf student reference guides
PPTX
tack Data Structure with Array and Linked List Implementation, Push and Pop O...
PDF
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
PDF
SMART SIGNAL TIMING FOR URBAN INTERSECTIONS USING REAL-TIME VEHICLE DETECTI...
PDF
August -2025_Top10 Read_Articles_ijait.pdf
PDF
Visual Aids for Exploratory Data Analysis.pdf
PPTX
Feature types and data preprocessing steps
PPTX
"Array and Linked List in Data Structures with Types, Operations, Implementat...
PPTX
CyberSecurity Mobile and Wireless Devices
PPTX
CURRICULAM DESIGN engineering FOR CSE 2025.pptx
PDF
Abrasive, erosive and cavitation wear.pdf
PPTX
Information Storage and Retrieval Techniques Unit III
PDF
BIO-INSPIRED ARCHITECTURE FOR PARSIMONIOUS CONVERSATIONAL INTELLIGENCE : THE ...
PDF
Improvement effect of pyrolyzed agro-food biochar on the properties of.pdf
PPTX
6ME3A-Unit-II-Sensors and Actuators_Handouts.pptx
PPTX
Module 8- Technological and Communication Skills.pptx
PPTX
ASME PCC-02 TRAINING -DESKTOP-NLE5HNP.pptx
PPTX
Chemical Technological Processes, Feasibility Study and Chemical Process Indu...
PPT
INTRODUCTION -Data Warehousing and Mining-M.Tech- VTU.ppt
Current and future trends in Computer Vision.pptx
737-MAX_SRG.pdf student reference guides
tack Data Structure with Array and Linked List Implementation, Push and Pop O...
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
SMART SIGNAL TIMING FOR URBAN INTERSECTIONS USING REAL-TIME VEHICLE DETECTI...
August -2025_Top10 Read_Articles_ijait.pdf
Visual Aids for Exploratory Data Analysis.pdf
Feature types and data preprocessing steps
"Array and Linked List in Data Structures with Types, Operations, Implementat...
CyberSecurity Mobile and Wireless Devices
CURRICULAM DESIGN engineering FOR CSE 2025.pptx
Abrasive, erosive and cavitation wear.pdf
Information Storage and Retrieval Techniques Unit III
BIO-INSPIRED ARCHITECTURE FOR PARSIMONIOUS CONVERSATIONAL INTELLIGENCE : THE ...
Improvement effect of pyrolyzed agro-food biochar on the properties of.pdf
6ME3A-Unit-II-Sensors and Actuators_Handouts.pptx
Module 8- Technological and Communication Skills.pptx
ASME PCC-02 TRAINING -DESKTOP-NLE5HNP.pptx
Chemical Technological Processes, Feasibility Study and Chemical Process Indu...
INTRODUCTION -Data Warehousing and Mining-M.Tech- VTU.ppt
Ad

PLC-Controlled Intelligent Conveyor System with AI-Enhanced Vision for Efficient Waste Sorting

  • 1. Academic Editors: Pedro Couto and Andrea Prati Received: 15 December 2024 Revised: 18 January 2025 Accepted: 31 January 2025 Published: 3 February 2025 Citation: Almtireen, N.; Reddy, V.; Sutton, M.; Nedvidek, A.; Karn, C.; Ryalat, M.; Elmoaqet, H.; Rawashdeh, N. PLC-Controlled Intelligent Conveyor System with AI-Enhanced Vision for Efficient Waste Sorting. Appl. Sci. 2025, 15, 1550. https:// doi.org/10.3390/app15031550 Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/ licenses/by/4.0/). Article PLC-Controlled Intelligent Conveyor System with AI-Enhanced Vision for Efficient Waste Sorting Natheer Almtireen 1,* , Viraj Reddy 2, Max Sutton 2, Alexander Nedvidek 2 , Caden Karn 2, Mutaz Ryalat 1 , Hisham Elmoaqet 1 and Nathir Rawashdeh 1,2,* 1 Department of Mechatronics Engineering, German Jordanian University, Amman 11180, Jordan; mutaz.ryalat@gju.edu.jo (M.R.); hisham.elmoaqet@gju.edu.jo (H.E.) 2 Department of Applied Computing, Michigan Technological University, Houghton, MI 49931, USA; greddy1@mtu.edu (V.R.); maxsutto@mtu.edu (M.S.); adnedvid@mtu.edu (A.N.); ctkarn@mtu.edu (C.K.) * Correspondence: natheer.almtireen@gju.edu.jo (N.A.); nathir.rawashdeh@gju.edu.jo or narawash@mtu.edu (N.R.) Abstract: Current waste sorting mechanisms, particularly those relying on manual pro- cesses, semi-automated systems, or technologies without Artificial Intelligence (AI) inte- gration, are hindered by inefficiencies, inaccuracies, and limited scalability, reducing their effectiveness in meeting growing waste management demands. This study introduces a prototype waste sorting machine that integrates an AI-driven vision system with a Pro- grammable Logic Controller (PLC) for high-accuracy automated waste sorting. The system, powered by the YOLOv8 deep learning model, achieved sorting accuracies of 88% for metal cans, 75% for paper, and 91% for plastic bottles, with an overall precision of 90%, a recall of 80%, and a mean average precision (mAP50) of 86%. The vision system provides real-time classification, while the PLC manages conveyor and actuator operations to ensure seamless sorting. Experimental results in a controlled environment validate the system’s high accuracy, minimal processing delays, and scalability for industrial recycling appli- cations. This innovative integration of AI vision with PLC automation enhances sorting efficiency, reduces ecological impacts, and minimizes labor dependency. Furthermore, the system aligns with sustainable waste management practices, promoting circular economy principles and advancing the Sustainable Development Goals (SDGs). Keywords: waste sorting; machine vision; sustainability; PLC; green technology; artificial intelligence 1. Introduction and Background 1.1. Global Challenges in Waste Management In recent years, the demand for effective waste management has grown due to chal- lenges related to the increasing volume of Municipal Solid Waste (MSW). The interplay of rapid urbanization and industrialization, along with inadequate waste management practices, has markedly exacerbated various environmental issues. Global waste generation is increasing at an alarming rate and shows no indication of a decline. 
In 2020, the world produced approximately 2.1 billion tons of MSW annually, a figure that is expected to increase by 56% by 2050, reaching 3.8 billion tons if no urgent measures are implemented; refer to Figure 1. This dramatic increase in waste generation places immense pressure on governments and authorities around the world to implement sustainable waste manage- ment practices that effectively mitigate environmental damage while addressing the social and economic implications of waste generation [1]. Appl. Sci. 2025, 15, 1550 https://doi.org/10.3390/app15031550
  • 2. Appl. Sci. 2025, 15, 1550 2 of 26 2.126 2.684 3.229 3.782 0 0.5 1 1.5 2 2.5 3 3.5 4 Waste (Billions of tonnes) 2020 2030 2040 2050 Figure 1. Projected global MSW generation (in billions of tons) for the years 2020, 2030, 2040, and 2050, if immediate and effective waste management interventions are not implemented [1]. Currently, the demand for effective waste management has grown due to the chal- lenges related to the increasing volume of recyclable materials. Rapid urbanization and industrialization have resulted in a dramatic increase in MSW, and the inadequate han- dling of these wastes contributes to significant environmental issues such as air and water pollution, loss of biodiversity, and climate change [1]. Countries undergoing urbanization and industrialization, with shifts in housing and consumption patterns along with access to a broader array of products, leads to an increase in per capita waste production. This trend is illustrated in Figure 2, which shows a clear relationship between higher Gross Domestic Product (GDP) and increased waste generation per capita [1]. 1000 500 0 0 30,000 60,000 90,000 120,000 GDPper capita (purchasing power parity – constant 2017 international USD) Waste (kg/person/year) 1500 Figure 2. Relationship between GDP per capita (purchasing power parity–constant 2017 international USD) and waste generation (kg/person/year) available between 2010 and 2020 [1]. The United States, for example, generated approximately 12% of global MSW in 2018, despite representing less than 5% of the global population. This places the United States among the highest per capita waste producers, with an average American generating more than 800 kilograms of waste per year. This disproportionate waste generation in wealthy countries reflects the impact of economic growth, industrialization, urbanization, and consumption and housing patterns on increasing waste volumes [1]. In addition, disparities in the generation of waste per capita between regions highlight the inequality in waste generation.
  • 3. Appl. Sci. 2025, 15, 1550 3 of 26 Figure 3 indicates that although North America and Central and South Asia gener- ate similar total amounts of MSW per capita, waste production varies significantly. This suggests that economic factors and population growth are key drivers of MSW generation. As urbanization accelerates in middle-income nations, the challenges of waste management intensify, particularly in areas without adequate waste disposal systems [1]. Singh et al. [2] highlighted the main shortcomings in India’s urban approach to solid waste management, including overloaded landfills, inefficient waste segregation, and inadequate policy en- forcement, highlighting the need for technological innovations and greater participation of the private sector to improve waste processing and recycling efficiency. 700 300 500 400 200 100 0 2.5 600 2.0 1.5 1.0 0.5 0.0 Total municipal solid waste (million tonnes) Municipal solid waste per capita (kg/person/day) Total municipal solid waste (million tonnes) Municipal solid waste per capita (kg/person/day) Figure 3. Regional MSW generation: total MSW (million tons) and MSW per capita (kg/person/day) [1]. 1.2. Towards Sustainable Waste Management Analyzing disparities in the generation of waste per capita highlights the significant role of waste management in global sustainability. This underscores the necessity of efficient waste management systems, which are crucial for environmental and social equity and are also essential for improving recycling efficiency and minimizing dependence on landfills. This aligns with social and environmental justice, particularly in line with the Sustainable Development Goals (SDGs) [1]. Effective waste management driven by public authorities, corporations, and civil society affects SDG 11 on sustainable cities, SDG 12 on responsible consumption, and SDG 13 on climate action. Inadequate waste management exacerbates inequality and impacts vulnerable populations through pollution and poor infrastructure. Hence, waste management plays an essential role in achieving multiple SDGs, emphasizing the integration of sustainability into waste governance [1]. These insights highlight the necessity of enhancing the circular economy transition, using Artificial Intelligence (AI) and automation technologies to achieve these goals. The circular economy concept focuses on preserving the value and quality of products, components, and materials beyond their initial use phase [3]. The momentum behind the circular economy has grown among scholars and practitioners, although varied in- terpretations persist. Kirchherr et al. [4] highlighted that the circular economy is often characterized by reducing, reusing, and recycling activities. Shifting from our existing linear production and consumption models to a circular economy is considered vital to achieving sustainability objectives and sustaining industrial competitiveness. This is a powerful tool for climate change mitigation by significantly reducing carbon emissions in heavy industries such as steel, plastics, aluminum, and cement, which are responsible for
  • 4. Appl. Sci. 2025, 15, 1550 4 of 26 substantial greenhouse gas emissions [5]. A report by Material Economics [5] suggested that by making the best use of existing materials, the European Union could achieve annual CO2 reductions of up to 296 million tons by 2050, substantially contributing to its target of net zero emissions. 1.3. Sensor-Based Systems to AI: Advancing Waste Sorting Traditional automated sorting systems, which predominantly utilize programmable logic controllers (PLCs) linked to sensor networks, face challenges in processing mixed or unorganized waste streams. PLCs, extensively used in the domain of industrial automa- tion, conventionally employ devices such as proximity sensors, limit switches, and push buttons to obtain electrical input from the physical environment. Although these sensors exhibit a high degree of efficiency in detecting the presence and proximity of objects, they demonstrate limited capability to differentiate between different materials such as plastic, metal, and paper. In mixed waste streams, the inability to distinguish between dissimilar materials obstructs the sorting process, thereby complicating the precise segregation of recy- clable waste. In addition, the diversity of geometries, dimensions, or constituent materials presented by such waste streams makes conventional sensor-based systems insufficient for current requirements. Bozma et al. [6] concluded that conventional sensor-based systems are ineffective in classifying objects with arbitrary shapes and orientations in industrial settings, which requires sophisticated vision systems capable of real-time classification, especially for complex sorting operations. Kazemi et al. [7] demonstrated the effectiveness of integrating vision systems with PLCs for industrial pick-and-place robots, highlighting improvements in object detection accuracy, real-time processing, and the flexibility of PLC-controlled robotic systems in sorting tasks. One of the critical challenges in modern waste management is efficiently sorting and recycling waste, particularly in urban and industrial settings, where the volume and complexity of waste streams continue to grow. Through their study on the influence of AI in waste sorting, Wilts et al. [8] illustrated that integrating AI into waste management systems considerably improves the efficiency of recycling and the purity of the material. The “ZRR for municipal waste project” underscores the tangible effects of AI on operational efficiency and safety, thus highlighting its potential to advance a sustainable circular economy. The use of AI and Machine Learning (ML) has shown promise in automating this process and enhancing the accuracy of the sorting, as demonstrated by several recent studies. Lubongo et al. [9] reviewed recent advances in the integration of ML and AI with spectroscopic methods and machine vision for the classification of plastic waste. Their investigation highlights advances in automated technologies that can achieve up to 99.99% sorting accuracy and processing speeds of 10 tons per hour [9]. Although challenges persist in the classification of complex films and plastics, AI-driven systems have significantly improved accuracy and throughput, indicating a positive trajectory for advancing recycling technologies. Cheng et al. [10] investigated an AI-powered sorting robot that utilizes image recognition and advanced grippers to classify containers such as PET bottles and colored glass to tackle Japan’s recycling challenges. Feng et al. 
[11] designed an intelligent waste bin powered by GECM-EfficientNet, achieving above 94% accuracy in waste classification on two datasets. Machine vision systems, particularly those that use deep learning algorithms, can classify objects based on visual data, enabling more accurate real-time sorting [12]. Strollo et al. [13] developed a system for the automatic classification of waste materi- als using computer vision and ML techniques. This system utilizes both Near-Infrared (NIR) and RGB cameras to analyze and identify incoming objects, achieving substantial
  • 5. Appl. Sci. 2025, 15, 1550 5 of 26 classification accuracy. Similarly, Jacobsen et al. [14] explored the application of AI in public waste sorting through an automated bin system, the “Waste Wizard”, deployed in public spaces such as zoos and retail stores. Their findings indicate that public participation, knowledge, and attitudes play an important role in waste sorting behavior, highlighting how user interaction with AI-powered systems can promote greater recycling efficacy in public contexts. In large-scale applications, Wilts et al. [8] investigated the use of robotic AI systems in municipal waste sorting facilities to replace or supplement manual sorting. Their re- search demonstrated improvements in recycling rates and the purity of sorted materials, which highlights the potential scalability of robotic AI solutions in industrial environ- ments. This study serves as a valuable reference for the scalability and operational effi- ciency that our proposed prototype intends to achieve. Further advancing this domain, Mohammed et al. [15] presented an AI-based digital model customized for smart cities, designed to automatically classify waste according to the principles of the circular econ- omy. Using an artificial neural network (ANN) and feature fusion techniques, their model achieved an accuracy of 91.7% in various waste categories, showcasing the role of AI in the solution to urban waste challenges and meeting recycling requirements. These studies collectively establish a foundation for the integration of AI in automated waste sorting, supporting the design and scalability of sustainable solutions in diverse settings. Convolutional Neural Networks (CNNs) have demonstrated a significant level of effectiveness in the domain of image classification tasks. This achievement is largely due to their ability to learn intricate patterns and features from extensive datasets, enabling them to generalize and accurately identify new and previously unseen objects. These systems, such as RecycleNet, using Mask R-CNN algorithms [16], achieved a 90.2% accuracy in the classification of recyclable materials, including aluminum, paper, PET bottles, and nylon [17]. This resulted in improved recycling efficiency by automating the processes for classifying and retrieving materials. These systems have been successfully implemented in a range of applications, including industrial automation and recycling tasks. Various research works in the literature have highlighted the application of CNNs for waste management systems. Melinte et al. [18] presented enhanced CNN-based object detectors for municipal waste detection on a single autonomous robotic system, achieving an accuracy of 97.63% with SSD and 95.76% with Faster R-CNN. Wu et al. [19] introduced a hybrid ensemble CNN model boosted by Multi-Scale Local Binary Pattern (MLBP) and Simulated Annealing (SA) for optimization, significantly enhancing accuracy to 99.01% on TrashNet and 99.41% on HGCD datasets. Chauhan et al. [20] developed an enhanced waste management system leveraging a CNN-based image classifier to automate waste sorting, resulting in notable efficiency and cost reductions. Their model outperformed traditional models such as AlexNet, VGG16, and ResNet34, with a classification accuracy of 94.53%. Abuhejleh et al. [21] employed a CNN-based method with transfer learning to categorize recyclable materials, achieving 95% accuracy for categories such as plastic, paper, and metal. Huang et al. 
[22] introduced a vision transformer-based method that enhances waste classification by utilizing self-attention mechanisms, achieving a top accuracy of 96.98% on the TrashNet dataset and outperforming traditional CNNs by effectively addressing global information within images. Bhattacharya et al. [23] presented the Self-Adaptive Waste Management System (SAWMS), which employs CNNs and conveyor systems for real-time sorting. It is capable of adapting and self-training to maintain accuracy with changing waste compositions. Abdu et al. [24] conducted a thorough review of deep learning techniques applied in waste management, pointing out current research gaps and future potential. The study highlighted the limited availability of comprehensive analyses and application-specific
datasets for image classification and object detection in waste tasks. It evaluated more than twenty datasets and critiqued existing methodologies while suggesting future research paths [24]. The YOLO (You Only Look Once) deep learning model has emerged as one of the most widely adopted object detection frameworks, known for its exceptional speed and accuracy in real-time applications [25]. Ultralytics YOLOv8 represents a significant advance, mainly due to extensive optimizations that have markedly improved the model's proficiency in object categorization [26]. These refinements make it particularly suitable for scenarios that require rapid and accurate decision-making, as commonly required in automated sorting systems. The latest iteration, YOLOv11, further advances these capabilities, providing state-of-the-art performance in a variety of applications, including detection, segmentation, pose estimation, and tracking. In waste sorting systems, the YOLO architecture can be trained to classify different categories of recyclables, such as plastic bottles, metal cans, and paper, even in situations where these objects are combined or displayed in chaotic or unstructured settings, enabling more efficient recycling processes. Several studies have used the YOLO deep learning model for waste classification systems. Choi et al. [27] enhanced plastic waste sorting by integrating image sensors with the YOLO deep learning model, achieving more than 91.7% accuracy in distinguishing plastics such as PET from PET-G. Wen et al. [28] introduced a sorting approach that combines the YOLOX detection model with the DeepSORT tracking algorithm for rapid real-time plastic waste sorting. This approach leverages visual detection to increase sorting accuracy and efficiency, demonstrating considerable flexibility and potential in complex sorting scenarios. Several studies have demonstrated the potential of combining AI vision systems with PLCs to improve waste sorting efficiency. Nandre et al. [29] explored a vision-based recycling system controlled by a PLC, emphasizing the benefits of AI in the accurate detection of recyclable materials. Similarly, Kokoulin et al. [30] explored modern sorting techniques that combine AI with conventional sorting technologies. The aim was to improve the efficiency and effectiveness of MSW management. These studies suggest that AI and ML algorithms can significantly improve waste classification and segregation compared to traditional sensor-based systems. Multiple research efforts have explored the application of AI, spectrometry, and computer vision methods to enhance sorting processes. For example, Zulfiqar et al. [31] introduced a PLC-based object sorting system that uses a combination of inductive, laser, and electromagnetic sensors to sort materials. However, while these methods are promising, they are still limited in handling unsegregated waste and complex material compositions. On the other hand, AI-powered vision systems can be trained to differentiate objects even in mixed, cluttered environments, making them more effective in real-world scenarios [32]. Investigations of AI-driven sorting solutions additionally emphasize the importance of scalability. Given that waste sorting facilities handle significant amounts of materials, achieving efficient scalability is crucial to sustaining high levels of operational throughput over time.
Many studies, such as [33–35], have shown that AI models like YOLO can process multiple objects in real time, offering scalability for industrial applications. This scalability, coupled with the ability to retrain models on new material classes, makes AI-driven sorting systems highly adaptable to future recycling needs. This paper presents an approach to designing a conveyor-based automated recycling sorting machine that integrates a PLC with an AI-driven vision system. Using the YOLOv8 deep learning model, the system can classify and sort paper, plastic, and metal cans with high precision. The AI-driven vision system facilitates instantaneous categorization of materials, whereas the PLC governs the conveyor mechanism and associated actuators. These components work collaboratively to achieve segregation of materials, guided by data input from the AI system. The proposed system has been subjected to an extensive series
of evaluations carried out within a carefully monitored environment. This environment ensures that all variables are precisely controlled and continuously monitored. The results of these evaluations indicate that the system exhibits excellent accuracy and is capable of scaling efficiently to accommodate larger-scale waste sorting operations. Hence, this study proposes the integration of AI and PLC to construct a scalable and effective prototype that is designed to improve waste sorting efficiency. The ultimate objective is to contribute significantly to the advancement of sustainable recycling methodologies.

2. System

2.1. Research Design and Methodology

This section outlines the detailed research design and methodology adopted in the development of a PLC-controlled intelligent conveyor system enhanced with AI-driven vision to improve efficiency in waste sorting. The methodologies implemented, especially the seamless integration of AI-powered vision systems with PLC automation, were systematically selected to improve precision and optimize the efficiency of the waste sorting process. The system design employs the YOLOv8 deep learning model, known for its superior accuracy and real-time processing capabilities, ensuring the proficient identification and categorization of various recyclable materials under dynamic operational conditions. Detailed descriptions of the experimental configurations, including the conveyor framework, camera configurations, and PLC programming, are discussed in the following sections to illustrate the practical implementation of the theoretical design within a controlled setting. The system was validated through an array of tests designed to evaluate its precision, reliability, and scalability in processing various types of material. To verify the robustness of the research design, extensive testing and calibration of AI algorithms and mechanical adjustments were essential for optimized performance. The system components were precisely calibrated, including the selection of detection thresholds and response timing, which are crucial for ensuring precise sorting results and rapid response. The direct correlation between the research design and the system performance metrics presented in the results section underscores the potential of the design to achieve the designated objectives while supporting the conclusions of the study.

2.2. Components

The system is composed of several fundamental components, including a camera, conveyor, actuator, PLC, and motor controller. Each of these elements plays an essential role in the effective execution of the object detection and sorting process. The camera chosen for this system implementation is an ONN 1440p desktop webcam, which is equipped with autofocus capabilities and a ring light. The selection of this camera was based on its superior resolution quality combined with its cost-effectiveness, making it an ideal choice to consistently obtain clear images of items as they move along the conveyor belt. The autofocus capability is crucial for the system's ability to manage changes in the dimensions of objects without the need for manual intervention. This feature is indispensable in maintaining a uniform level of detection precision in various scenarios. The camera is installed at a right angle to the conveyor belt, facilitating the acquisition of distinct images of the items as they temporarily halt for examination.
These images are subsequently sent to a Personal Computer (PC) through a USB interface, where they undergo analysis by a part-detection algorithm to recognize the items. The conveyor system, illustrated in Figure 4, enables the transport of objects within the system and is driven by a brushless DC motor, also shown in Figure 4, which is controlled by a motor driver that receives signals from the PLC. The velocity and distance traversed by the objects on the conveyor are controlled by the duration for which the motor receives
power. This design facilitates exact regulation of the conveyor's motion, guaranteeing that the objects halt momentarily in front of the camera for image capture before proceeding along the conveyor belt. The conveyor system is designed to handle the weight and dimensions of recycled materials, ensuring smooth operation.

Figure 4. (Left) Conveyor system for object transport. (Right) Oriental Motor BLM460S-GFV2 for precise motion control.

The sorting mechanism employs a stepper motor as the actuator, which guides a separator to direct cans into one container and bottles into another. The precise motion control of the stepper motor ensures that objects are sorted correctly based on their identified types. This sorting process is synchronized with the data received from the image processing system, ensuring real-time adjustments and smooth operation without interruptions. The system's control architecture is centered around an Allen-Bradley PLC with input/output (I/O) modules and an Ethernet port, which is responsible not only for controlling the conveyor and actuator but also for receiving input from the PC. The PLC functions through ladder logic, enabling real-time control and system monitoring. Additionally, it supports remote operation via a Human–Machine Interface (HMI), simplifying the process of monitoring system performance and implementing necessary adjustments. The flexibility of the PLC allows for future expansions or modifications to the system without significant hardware changes. The 60-watt brushless motor driver, shown in Figure 5, simplifies the operation of the conveyor by directly processing input signals (0–5 V) from the PLC. This design eliminates the need for extra control signal conditioning, thereby reducing the complexity of the system and easing the wiring process. The motor controller provides improved scalability and adaptability, allowing the integration of additional components or modifications to system parameters while maintaining reliable performance and reducing energy usage. All of the previously mentioned components are vital in enhancing the system's efficiency, ensuring that it operates effectively and reliably. Together, they enable precise, real-time object identification and classification. Their combined efforts ensure that the system functions seamlessly and autonomously, enhancing the overall process for optimal efficiency and productivity.
Figure 5. BMUD60-A2 brushless DC motor driver for precise control and efficient operation of the conveyor system.

2.3. Architecture

In the previous section, the different elements that constitute the system architecture were outlined. This architecture incorporates several interconnected elements that work together to efficiently detect and sort objects automatically. The system is designed to include a conveyor belt, a camera, a stepper motor, a PLC, an HMI, and a PC. Each component has a unique function within the system, facilitating the exchange of data and control signals to ensure efficient real-time processing and operation. In this prototype, a motor relay controls the conveyor belt, with its direction managed via two input ports. These ports determine whether the motor rotates clockwise or anticlockwise, based on the received signals. This control is essential for the conveyor to function properly in moving items through the detection and sorting process. A comprehensive description of the system components and their connections is illustrated in Figure 6.

Figure 6. Schematic of the automated waste sorting system: conveyor belt with a camera, stepper motor, PLC, PC, HMI, and bins for object sorting and categorization.

A camera is located at the start of the conveyor belt, where various parts, such as plastic, metal, or paper, are placed. This camera plays a crucial role in taking pictures of items as they enter the system. The captured images are transmitted to the PC for immediate processing and examination. Positioning the camera at the conveyor entry point guarantees
that every item is captured immediately, allowing prompt object recognition. As illustrated in Figure 7, the PC and the PLC are connected by an Ethernet cable, establishing a reliable communication network. This Ethernet connection allows the PC to interact with the PLC using tag addresses to perform data writing and reading operations on the PLC. The PC processes the camera data, identifies the objects, and sends control commands to the PLC. These commands instruct the PLC on how to manage the sorting process. The use of Ethernet provides high-speed communication between the two components, ensuring that real-time control is maintained throughout the process. The PLC is integral to the control architecture of the system. It includes I/O ports that facilitate connections to the motor control system, the start and stop buttons, and the HMI. The start and stop buttons provide manual control over the conveyor system, allowing operators to start or stop the process as needed. The motor controller interfaced with the PLC allows the system to regulate the conveyor's movement with precision, ensuring that items are delivered accurately to their specified containers. The HMI serves as a user interface that allows operators to monitor system status, view process data, and make real-time adjustments as needed. The integration of the HMI with the PLC provides operators with valuable insight into system performance and enables efficient process control. Once the camera captures the object and the computer algorithm recognizes it, the information is sent via the Ethernet connection to the PLC. The PLC then directs the conveyor belt to transport the items to their designated sorting positions. Once the part reaches the end of the belt, the PLC activates a stepper motor, which is responsible for pushing the object into the appropriate bin. The system employs color-coded bins for different materials (e.g., plastic, metal, and paper), allowing for efficient sorting based on the detected object type. The stepper motor provides precise control, ensuring that each object is accurately directed to its designated bin. The PLC facilitates adaptable control and system scalability, and the Ethernet link between the PC and the PLC guarantees swift and stable data transmission, while the HMI improves user interaction with the system, offering real-time feedback and control capabilities. By integrating various components and their associated functionalities, this architectural design establishes an efficient and automated object detection and sorting mechanism, effectively illustrating a viable strategy for automated waste sorting through the integration of multiple technologies to achieve a comprehensive and efficient process.

Figure 7. System connection diagram: communication and control links integrating the camera, PC, PLC, motor, and sensors.
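To make the tag-based exchange concrete, the following Python sketch illustrates the PC-side write path using the Pylogix library adopted in this project (see Appendix A). It is a minimal illustration rather than the full production loop: the tag names mirror the paper_out, can_out, and plastic_out inputs visible in the subroutine of Figure 11, the IP address is the one configured in Appendix A, and the written value is illustrative.

from pylogix import PLC
from typing import Optional

# Map vision-model class labels to the program-scoped PLC tags that
# request the corresponding sorting action (tag names as in Figure 11).
CLASS_TO_TAG = {
    "can": "Program:MainProgram.can_out",
    "paper": "Program:MainProgram.paper_out",
    "plastic": "Program:MainProgram.plastic_out",
}

def report_detection(label: Optional[str], plc_ip: str = "192.0.0.206") -> None:
    """Write a detected class to the PLC over the Ethernet link."""
    tag = CLASS_TO_TAG.get(label or "")
    if tag is None:
        return  # unrecognized parts fall through to the reject path
    with PLC() as comm:
        comm.IPAddress = plc_ip
        ret = comm.Write(tag, 1)        # set the request bit for this class
        print(ret.TagName, ret.Status)  # 'Success' when the write is accepted

Because Pylogix addresses controller tags symbolically, adding a new material class on the PLC side only requires one more entry in this mapping, which matches the scalability argument made above.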
2.4. Operation Workflow

The automated classification system functions through a sequential procedure that relies on camera input data and is managed by a PLC. Figure 8 illustrates a flowchart detailing the main stages of this procedure, designed to detect, identify, and categorize different waste materials into suitable containers.

Figure 8. Flowchart of part classification and sorting processes using camera-based recognition. (Blocks: Start → Move to Camera → Part Present? → Identify Part → Part is Can?/Part is Bottle?/Part is Paper? → Move to Can Bin/Bottle Bin/Paper Bin or Move to Reject → Move Actuator.)

The process begins with the system initialization, indicated by the Start block. Once started, the system moves to the camera position, where it waits to detect the presence of a part. If no part is detected, the system is retriggered to continue monitoring the arrival of a new item on the conveyor. When an object is detected, the camera captures an image and the AI algorithm within the PC recognizes the object by analyzing the image data. The PLC then assesses the object and categorizes it based on predefined criteria: metal can, plastic bottle, paper, or reject. Based on the identified object, the system follows one of the following steps:

1. Metal Can Identification: If the detected object is identified as a metal can, the PLC sends a command to the stepper motor, which then positions the object correctly and pushes it off the conveyor into the designated can bin.
2. Plastic Bottle Identification: If the detected object is identified as a plastic bottle, the PLC instructs the stepper motor to push the object into the bottle bin.
3. Paper Identification: If the part is identified as paper, the PLC sends a command to the stepper motor, enabling the object to move toward the paper bin located at the end of the conveyor.

In each scenario, the stepper motor facilitates the precise mechanical relocation of detected items to their designated containers, ensuring accurate categorization. Components that do not conform to the predefined categories of metal, plastic, or paper are classified as rejects. Subsequently, the system directs these unidentifiable items to a dedicated reject
container. This procedural framework improves the efficiency of the automation process, with the PLC managing the logical operations and the stepper motor executing the sorting tasks based on object classification.

3. Deep Learning Model and Programming Framework

This section describes the deep learning model utilized to train the classification system. It provides a detailed account of both the image classification model and the logic control system integrated into the PLC, as well as its application in the physical prototype. The selection of an appropriate platform for the AI-based vision system was a critical challenge. In this project, the Ultralytics YOLOv8 deep learning neural network package [26] was used due to its ease of implementation, extensive support packages, and robust performance in object detection tasks. The choice of YOLOv8 allows for efficient processing and accurate detection, making it suitable for real-time applications. The neural network was implemented using the Python programming language, selected for its modular structure and ease of use. Although Python may not be the most high-performance language in terms of computational speed, it provides considerable flexibility and demonstrates sufficient efficiency for tasks involving single-frame image capture and object detection. This makes it suitable for proof-of-principle applications. The training dataset consisted of approximately 2000 captured images, balanced across three classes: cans, plastic bottles, and paper. These images were collected under diverse environmental conditions, with variations in geometry, orientation, size, and lighting to reflect the diversity of real-world waste sorting scenarios and ensure robust and balanced model performance. These images were used to train the YOLOv8 model, allowing accurate classification of objects within the recycling stream. Figure 9 provides examples of the batch process used for image identification and classification during model training. A batch refers to a subset of training images that are processed through the model, where tasks such as edge detection and bounding-box regression are performed. In Figure 9a, the labels designated by the human annotator (trainer) are presented, whereas Figure 9b illustrates the predicted labels of the model for various objects, including examples of paper. This process enables the model to learn iteratively from smaller data segments, enhancing its accuracy in object detection and classification.

Figure 9. Training samples used for object detection. (a) Labeled images annotated by the human trainer, featuring objects such as plastic bottles and crumpled paper in varied conditions. (b) Predicted labels generated by the model.
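Before turning to the code itself, it is worth noting how compact the training step is with the Ultralytics API. The sketch below is a minimal illustration assuming a config.yaml dataset descriptor for the three classes; the epoch count, image size, and batch size are those reported in Section 3.1, and the checkpoint path is the Ultralytics default rather than a path taken from this project.

from ultralytics import YOLO

# Fine-tune the pretrained YOLOv8 nano weights on the three-class
# recycling dataset described by config.yaml.
model = YOLO("yolov8n.pt")
model.train(data="config.yaml", epochs=50, imgsz=736, batch=16)

# Load the best checkpoint for inference on the conveyor camera feed
# (runs/detect/train/weights/best.pt is the default output location).
best = YOLO("runs/detect/train/weights/best.pt")
results = best("frame.jpg")  # single-frame detection, as used on the conveyor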
3.1. Code and Integration with PLC

The programming environment used for this project is Python 3.8.1, selected for its stability and compatibility with the Ultralytics YOLOv8 software package. Additionally, the Pylogix library was employed to interface with the Allen-Bradley PLC, ensuring efficient communication between the image processing model and the control system. Detailed Python codes are available in Appendix A. In Figure A1, key libraries such as Ultralytics and cv2 are imported to handle video capture and processing. The first code segment imports the training images, configuring parameters like 50 epochs, an image dimension of 736 × 736 pixels, and a batch size of 16. These settings were adjusted in consideration of the hardware limitations and computational demands of the system. The image capture process is initiated with the command cap.read(), after which the model performs real-time inference using the function model(image, stream=True) to detect and classify objects within the captured frames. Figure A2 displays the second segment of the source code, highlighting its incorporation of the Pylogix library. This library substantially facilitates the communication processes with the PLC. Within this code snippet, essential tasks are accomplished, beginning with the specification of the PLC's Internet Protocol (IP) address and encompassing critical operations, such as retrieving tag inventories using the GetTagList function, as well as executing commands to write data to tags and read their current values. These integral capabilities collectively enable seamless interaction with the PLC, facilitating the execution of control actions derived from processed data. Consequently, the system achieves the proficiency required for real-time automation and sorting procedures.

3.2. PLC Programming and Control Logic

Programming of the PLC is performed using ladder logic within the RS Logix 5000 v20 software. Several software functions are used, including examine-input instructions, move commands, timers and counters, internal registers, subroutines, and output ports. The program includes test switches and physical button ports for maintenance and testing purposes. The system can be reprogrammed to adjust the end positions by modifying the timer values. Additionally, new parts can be seamlessly integrated by duplicating rungs in the subroutine without requiring significant changes to the main routine. Bypasses for diagnostics and testing are also included, allowing the process to be controlled through HMI buttons or physical switches. The primary function of the program is structured around a main routine, illustrated in Figure 10, which performs the following checks and processes:

1. The start button is pressed;
2. The conveyor moves forward for a predefined duration (T1);
3. The camera detects the presence and type of part;
4. The program triggers the appropriate subroutine;
5. The subroutine loads a preset timer according to the detected part;
6. The motor operates once activated, triggered by the T2 timer;
7. The part is pushed by the actuator;
8. All timers are reset, and the program repeats the cycle;
9. If no parts are detected, the program is stopped until the operator resets or restarts it.

The subroutine depicted in Figure 11 plays a crucial role in acquiring the input data from the Ethernet network. Subsequently, it loads one of three predetermined values onto a timer.
These preset values dictate the distance the conveyor travels before coming to a halt. To maintain uninterrupted operation as components are introduced into the system, latching logic is used. This approach simultaneously isolates the outputs connected to previously sorted components, thus maintaining precise sorting into the designated bins.
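Rendered as Python-like pseudocode, the subroutine's preset selection reduces to a small lookup. The millisecond values below are those visible in the MOV instructions of Figure 11; the function is a sketch of the ladder logic for readability, not code that runs on the PLC.

from typing import Optional

# Conveyor-timer presets (T3.PRE) loaded by the subroutine in Figure 11.
# The preset fixes how long the conveyor runs, i.e., whether a part stops
# under the actuator (cans, bottles) or travels on to the end bin (paper).
TIMER_PRESETS_MS = {
    "paper": 22000,    # paper passes the actuator to the bin at the end
    "can": 17500,      # cans stop under the actuator
    "plastic": 17500,  # plastic bottles stop under the actuator
}
DEFAULT_PRESET_MS = 2000  # short advance when no class input is active

def conveyor_preset(detected: Optional[str]) -> int:
    """Return the run time to load into timer T3 for the detected class."""
    if detected is None:
        return DEFAULT_PRESET_MS
    return TIMER_PRESETS_MS.get(detected, DEFAULT_PRESET_MS)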
Figure 10. Ladder diagram of the main routine illustrating conveyor movement, part detection, and actuator control within the automated sorting system. (The 12-rung routine covers the start/stop push buttons and HMI bits, the T1/T3/T4 on-delay timers, the MOTOR_FWD output, the JSR call to the sorting subroutine, and the latch/unlatch and reset logic for the CAN and PAPER outputs.)

Paper-based items are allowed to pass directly beyond the actuator and proceed into a designated container specifically designed for paper, located at the end of the conveyor belt system. On the other hand, plastic and metallic items are programmed to pause directly underneath the actuator. Upon achieving the correct placement, the actuator is initiated. The stepper motor responds by rotating clockwise or counterclockwise, depending on whether a metal can or plastic bottle has been identified, thus guiding the item into the appropriate collection bin located on the left or right side of the conveyor system. After the sorting mechanism has completed its operation, all associated timers are set back to their initial state, and any latched outputs are subsequently unlatched. As shown in Figure 12a, the process can be operated through the main HMI screen or physical buttons, depending on the user's preference. The configuration settings of the HMI panel, shown in Figure 12b, allow easy control
and adjustment of the system parameters, ensuring smooth operation and maintenance. The counters integrated into the input rungs track the number of processed parts and display this information on numeric displays, as configured in the HMI panel settings shown in Figure 12a.

Figure 11. Ladder diagram of the subroutine handling conveyor timing and sorting actions specific to identified parts, including paper, cans, and plastic. (The paper_out, can_out, and plastic_out inputs load the corresponding preset into timer T3 via MOV instructions, 22,000 for paper and 17,500 for cans and plastic bottles, and increment CTU part counters; paper and can detections also latch the PAPER and CAN outputs, and a default preset of 2000 is loaded when no class input is active.)

Figure 12. (a) Main HMI interface displaying start/stop controls and material counters for paper, plastic, and cans. (b) Configuration panel for system settings, including start roll and actuator timing adjustments.

3.3. System Deployment

The physical system was developed to validate the functionality of the classification system, which consists of a conveyor belt powered by an Oriental Motor BMUD60-A2
brushless motor driver (Oriental Motor Co., Ltd., Tokyo, Japan), which facilitates the movement of recycling materials along the conveyor. At the end of the conveyor, a stepper motor is integrated with a flipper mechanism designed to sort bottles and cans. As illustrated in Figure 13, the system operates by advancing items along the conveyor until they are detected by the vision system using a camera. If the system identifies paper, the item proceeds to the end of the conveyor without intervention. In contrast, if a can is detected, the flipper mechanism directs it to the left bin, and if a plastic bottle is identified, it is sorted into the right bin. The flipper mechanism is controlled by a stepper motor managed by an Arduino Uno, which synchronizes the sorting process based on input signals received from the PLC.

Figure 13. Experimental setup of the conveyor sorter, illustrating the conveyor belt, sorting mechanism, and bins designated for the recycling of cans, plastic bottles, and paper.

This setup provides a simplified yet effective platform for validating the performance of the vision-based classification system, ensuring reliable operation without introducing unnecessary complexity into the physical design. The system is currently configured to classify three types of waste, i.e., paper, cans, and bottles, while offering scalability to include additional recycling categories by incorporating more conveyors and actuators as needed.

4. Performance and Evaluation Results

The prototype was evaluated using three categories of trained objects: plastic bottles, metal cans, and paper. The control system, which integrates the YOLOv8 deep learning algorithm for object detection, a PLC for control, and a stepper motor with a conveyor for actuation, demonstrated consistent and reliable performance. Using the YOLOv8 algorithm, the system achieved precise and rapid detection and classification of objects with minimal processing delays, ensuring uninterrupted and efficient sorting operations. Objects were sorted with high accuracy and minimal false readings, validating the prototype's ability to handle real-time waste sorting. The system also maintained its effectiveness during continuous operation, demonstrating its ability to sustain high-throughput performance.
The results highlight the functionality of the prototype and its scalability for faster sorting operations and adaptability to additional object categories, such as organic waste. The integration of the YOLOv8 deep learning algorithm with the PLC was pivotal in achieving these results. While the YOLOv8 algorithm ensured accurate and efficient object classification, the PLC seamlessly managed the conveyor, motor, and actuator, enabling smooth and reliable sorting based on classification results. This real-time coordination directed each identified object to its designated category, plastic, metal, or paper, without disrupting conveyor flow. The continuous sorting capability underscores the potential of the system for industrial-scale implementation. The system architecture also supports scalability, allowing for the classification of new material categories by retraining the YOLOv8 algorithm and making minor adjustments to the PLC programming. This adaptability is critical to address the evolving requirements of waste management facilities as recycling standards and regulations continue to advance. The performance of the YOLOv8 deep learning algorithm was rigorously evaluated using several metrics to assess its classification accuracy and operational robustness. These metrics included the confusion matrix, which provided insight into prediction errors, and precision and recall, which quantified the quality of positive class predictions. Furthermore, the Mean Average Precision (mAP) was calculated to evaluate the overall predictive performance at varying thresholds. Training and validation losses, including box loss, classification loss, and Distribution Focal Loss (DFL), were systematically analyzed to optimize the algorithm's performance. These metrics demonstrated the high precision, scalability, and effectiveness of the YOLOv8 algorithm in real-time waste sorting scenarios. The subsequent sections provide a detailed discussion and analysis of these findings.

4.1. Confusion Matrix

Figure 14 presents the confusion matrix used to evaluate the classification performance of the model. In this matrix, each row corresponds to the true classes, while each column corresponds to the classes predicted by the model. The scores, ranging from 0 to 1, provide a quantitative measure of accuracy. The model achieved an accuracy of 88% for metal cans, 75% for paper, and 91% for plastic bottles, demonstrating strong performance, particularly with plastics. Although the results align with the project objectives, further improvements are expected by introducing a black background on the conveyor belt to reduce noise and minimize errors caused by random objects, improving overall classification accuracy.

Figure 14. Confusion matrix displaying the classification results for cans, paper, plastic bottles, and background, with camera-based recognition (diagonal entries: 0.88 for cans, 0.75 for paper, and 0.91 for plastic bottles).
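A row-normalized matrix of this kind can be reproduced from validation predictions in a few lines. The snippet below uses scikit-learn as an illustration (not necessarily the tooling used in this study), and the label vectors are placeholders standing in for the real validation output.

import numpy as np
from sklearn.metrics import confusion_matrix

CLASSES = ["Cans", "Paper", "Plastic bottle", "Background"]

# Placeholder vectors; in practice, one entry per matched detection,
# with "Background" covering missed objects and false alarms.
y_true = ["Cans", "Paper", "Plastic bottle", "Paper"]
y_pred = ["Cans", "Paper", "Plastic bottle", "Background"]

# normalize="true" divides each row by its class count, so the diagonal
# holds the per-class rates reported above (0.88, 0.75, 0.91 in Figure 14).
cm = confusion_matrix(y_true, y_pred, labels=CLASSES, normalize="true")
print(np.round(cm, 2))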
4.2. Training and Validation Box Loss

The training and validation box loss measures the error associated with the predicted bounding boxes compared to the ground-truth bounding boxes (i.e., the human-annotated correct boxes). During labeling, the user annotates the object by drawing a box with the corresponding label, while the ML model generates bounding boxes based on its understanding of the image. These metrics are crucial during the training and validation phases of ML models, as they quantify how well the predicted bounding boxes align with the ground-truth bounding boxes for objects in an image, particularly in tasks aimed at accurately locating and classifying objects. Figure 15 reflects the overlapping errors between the predicted and actual bounding boxes. The box loss demonstrates a near-linear decrease, indicating improved accuracy in locating bounding boxes on objects. A box loss of 0.6 is generally considered acceptable for multiclass detection models, highlighting the model's effective performance in locating objects within images.

Figure 15. Training and validation metrics over 50 epochs, illustrating the progression of box loss, classification loss, DFL, precision, recall, and mAP (mAP50 and mAP50-95) for model evaluation and optimization.

4.3. Training and Validation Class Loss

The training and validation class loss measures the accuracy of the model in assigning objects to their correct categories during the training and validation phases. In object detection and classification tasks like YOLOv8, class loss is a crucial metric that quantifies the error between predicted class probabilities and ground-truth class labels within bounding boxes, evaluating the model's ability to correctly classify detected objects. Figure 15 shows that the class loss follows a logarithmic decline, stabilizing around 0.5. The relatively high class loss is probably due to background confusion in the paper class. However, this loss is expected to decrease with further training and additional epochs, as the YOLOv8 model continues to refine its classification accuracy.

4.4. Training and Validation DFL Loss

The DFL is a specialized loss function used in object detection models to improve the regression of the bounding box by refining the distribution of predicted values. It measures the error associated with key point detection or regression during training and validation, with a focus on improving the model's ability to predict accurate bounding boxes. DFL is particularly effective for difficult-to-classify examples and incorporates the Complete Intersection over Union (CIoU) metric to assess the overlap and alignment between predicted and ground-truth bounding boxes.
A DFL loss of 1.2, as shown in Figure 15, is considered acceptable, especially for complex object shapes like flat, crumpled, or book-bound paper. This loss function ensures robust and precise localization in challenging scenarios.

4.5. Precision

Precision quantifies the proportion of true positive predictions (i.e., correct detections) relative to the total number of positive predictions (both true positives and false positives). Higher precision indicates that the algorithm is yielding more relevant and accurate results. As illustrated in Figure 15, the precision metric reaches a maximum value of approximately 90%, indicating a high level of precision in detecting objects within the dataset. Increasing the number of training epochs is unlikely to improve this metric further and could potentially lead to overfitting, where the model becomes overly specialized to the training data, limiting its ability to generalize to new, unseen images.

4.6. Recall

Recall is a performance metric that evaluates how effectively a model identifies true positives (i.e., correct detections) out of all actual ground-truth objects in a dataset. It measures the model's ability to detect all relevant objects and avoid false negatives (i.e., missed detections), quantifying the proportion of true positive predictions relative to all ground-truth objects. As shown in Figure 15, the recall metric reaches a maximum value of approximately 80%. However, additional training epochs are unlikely to improve this score and may lead to overtraining, impairing the model's ability to generalize to unseen data. This underscores the need to balance training to optimize recall while preserving robust performance on new datasets.

4.7. Mean Average Precision (mAP50 and mAP50-95)

The mAP score is a crucial metric to evaluate the overall performance of an object detection model, as it measures how well the model detects objects by calculating the Intersection over Union (IoU), which assesses the overlap between the predicted and ground-truth bounding boxes. The mAP50 metric evaluates a model's ability to correctly detect objects at a more lenient overlap criterion (0.5 IoU), meaning it prioritizes detecting objects but does not necessarily require high localization precision. Figure 15 illustrates the mAP50 score, using a 50% overlap threshold, showing that the model reached approximately 86%. This score is considered adequate for object identification, even when the boundaries are not precisely matched. This high mAP50 score signifies the robust object identification capability of the model, despite imperfect alignment of the boundaries. The mAP50-95 metric is widely used for evaluating object detection models, providing a more comprehensive evaluation by calculating the IoU over a broad range, specifically from 0.5 to 0.95 in increments of 0.05. This method provides a comprehensive examination of the model's ability to precisely detect objects with different degrees of overlap, thus allowing for a more rigorous evaluation of detection and localization accuracy. By assessing multiple IoU thresholds, the metric gives a holistic view of the robustness of the model under various detection conditions. In this case, the model achieved an mAP50-95 score of approximately 70%, indicating strong detection performance across the evaluated IoU thresholds. The equation for calculating the mAP is as follows:

\text{mAP} = \frac{1}{k} \sum_{i=1}^{k} \text{AP}_i \tag{1}

where:
• mAP: Mean Average Precision, a single scalar value that summarizes the model's detection precision across classes and IoU thresholds.
• k: the total number of IoU thresholds (or classes) over which the mAP is averaged. For mAP50-95, k covers multiple thresholds from 0.5 to 0.95.
• AP_i: the Average Precision at the i-th threshold, calculated as the area under the Precision-Recall (PR) curve for that specific IoU threshold.

The equation computes mAP by averaging precision scores across the specified IoU thresholds, providing a comprehensive measure of object detection performance that reflects both high-confidence detections and adaptability to varying object overlaps. For mAP50-95, for instance, the AP is evaluated at the ten thresholds 0.50, 0.55, ..., 0.95 and the ten values are averaged (k = 10).

5. Discussion

The performance of the proposed PLC-Controlled Intelligent Conveyor System with AI-enhanced vision, while promising, necessitates a comprehensive evaluation of several critical aspects. The discussion highlights the distinct contributions and innovations of our system, along with its practical implications and scalability potential. This includes its ability to significantly improve the effectiveness of waste management practices, aligned with broader environmental sustainability goals. Following this, reflections on the system's performance and optimization strategies are examined, where the robustness and adaptability of the system are explored in depth, with a special emphasis on its ability to adapt to dynamic operational conditions that extend beyond controlled environments. Subsequently, a comparative analysis with existing waste sorting technologies is presented. This section offers a concise comparison, underscoring how the system differentiates itself in terms of operational efficiency and technological advancement compared to current market solutions. Finally, future extensions are discussed, including potential improvements and ongoing efforts to refine and enhance the functionalities of the system. These efforts are driven by the objectives of technological innovation and environmental sustainability, to keep the system at the forefront of waste management technology.

5.1. Contribution and Research Implications

The incorporation of the YOLOv8 deep learning model with a PLC into the sorting system marks a notable enhancement in automated waste sorting technology. The results confirm the high precision of the system, achieving classification accuracies of 88% for metal cans and up to 91% for plastic bottles. This shows a crucial improvement in the efficiency of recycling operations. In addition, the system has been proven capable of maintaining this high level of accuracy even in the presence of continuous chaotic waste streams and improperly arranged objects, showcasing its suitability for large-scale industrial waste management applications. The implementation of the system has provided valuable insight into the practical challenges of automated waste sorting. One significant challenge is handling object overlaps, which can lead to inaccuracies in classification. Although the system demonstrates proficiency in sorting and classifying objects within disordered waste streams, advancing the development of algorithms that can accurately differentiate and categorize overlapping objects is identified as a critical focus for subsequent iterations, as this is essential to address complex classification challenges. Such refinements are crucial to improve the robustness and efficiency of the sorting system.
Future enhancements will focus on increasing the resilience and reliability of the system in managing diverse operational conditions, ensuring consistent performance in a wider range of waste management scenarios.
5.2. Model Performance and Optimization

The model achieved notable performance metrics, including a box loss of 0.6, a classification loss of 0.5, a DFL loss of 1.2, an mAP50 of 86%, and an mAP50-95 of 70%. During a live test with a camera, the model demonstrated accurate object classification across various object orientations. However, additional training could risk overfitting, reducing the ability of the model to generalize to new data. Therefore, hyperparameter tuning becomes essential to improve performance without the need for more epochs. Hyperparameters, including the learning rate, batch size, number of epochs, dropout rate, and weight initialization, significantly influence the model's performance and optimization process. For this study, the hyperparameters were set to the default values recommended by Ultralytics, as altering them would require extensive retraining and comparison, an inherently time-consuming task. The learning rate, batch size, and weight initialization all contribute uniquely to the optimization of the model. Adjusting the learning rate is critical to preventing premature convergence or the risk of getting trapped in local minima. The batch size affects the stability of gradient updates, influencing the training dynamics, while weight initialization determines the initial setup of model parameters, influencing the efficiency of the learning process. The ability to appropriately fine-tune these settings is crucial for achieving optimal results in real-world applications.
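For readers who wish to go beyond the defaults, such a tuning experiment is a one-call change in the Ultralytics API. The sketch below shows how the hyperparameters discussed above could be overridden; argument names follow the Ultralytics YOLOv8 releases, and the values shown are illustrative rather than tuned settings from this study.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="config.yaml",  # dataset descriptor, as in Appendix A
    epochs=50,
    imgsz=736,
    batch=16,            # batch size affects gradient-update stability
    lr0=0.01,            # initial learning rate
    optimizer="SGD",     # optimizer choice also fixes the update rule
    seed=0,              # reproducible initialization and augmentation
)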
5.3. Comparative Analysis with Existing Systems

A comparative analysis is conducted to contextualize the performance of the developed PLC-controlled intelligent conveyor system with AI-enhanced vision against similar systems reported in the literature. This comparison is not intended to provide a thorough review but rather to underscore the contributions of the current work within the field of automated waste sorting technologies. Key performance metrics, such as accuracy, precision, and recall, are commonly employed to evaluate the effectiveness of these systems. In a very recent study, Sayem et al. [36] reported a waste classification system using deep learning that achieved an accuracy of 83.11% across 28 distinct recyclable categories and an mAP50 of 63%, reflecting its effectiveness in object detection tasks. In another study, Al-Mashhadani et al. [37] evaluated several deep learning models for the classification of waste materials, including ResNet50, GoogleNet, InceptionV3, and Xception. Among these, the ResNet50 model exhibited an accuracy of 95%, a precision of 95.4%, a recall of 95%, and an F1 score of 94.8%, which shows its robustness in the classification of waste materials. Feng et al. [11] developed an intelligent waste bin with automatic sorting, introducing GECM-EfficientNet for real-time waste classification. The model achieved 94.54% accuracy on a self-built household waste dataset and 94.23% on the TrashNet dataset. In comparison, the system described in this study demonstrated a classification accuracy of up to 91% for plastic bottles, a precision of 90%, a recall of 80%, and an mAP50 of approximately 86%. These results highlight the robust object identification capabilities of the developed system even when object boundaries are imperfectly aligned. In addition, the integration of an advanced deep learning algorithm with a PLC facilitates faster processing times and excellent accuracy, highlighting its potential contributions to the advancement of automated waste sorting technologies. This comparison underscores the advances presented in this work, situating it within the broader context of existing automated waste sorting systems.

5.4. Future Extensions

The modular design of the system lays a robust foundation for scalability and versatility, facilitating seamless expansion to accommodate larger and more complex operational demands. This design supports straightforward upgrades to the AI deep learning model
and adjustments to the PLC programming, enabling the system to meet diverse waste management requirements. Beyond MSW, the flexibility of the system extends to the classification and sorting of a wide range of recyclables, including organic waste, electronic components, and specialized industrial or chemical materials. The modular design further allows the integration of additional sorting stations customized to bio-based materials or other specific waste streams, paving the way for a comprehensive and versatile waste management solution capable of addressing the evolving demands of recycling industries. To improve the performance efficiency of the system, several optimization strategies are being explored. These include the incorporation of advanced machine and deep learning algorithms to improve object recognition and sorting accuracy, particularly under challenging conditions such as varying lighting or overlapping objects. Integrating additional sensors, such as infrared or ultrasonic technologies, could provide additional data to refine the identification of materials, improving the precision and reliability of the sorting process. Future extensions will emphasize increasing the system's ability to manage overlapping objects, a critical challenge in automated sorting. Advanced image processing algorithms are being investigated to improve segmentation and edge detection, facilitating more accurate differentiation between stacked or adjacent items. Depth-sensing technologies, such as stereo cameras or LiDAR, are also being considered to provide spatial information that can further improve the system's capacity to handle complex waste configurations. These enhancements aim to refine the sorting process, reduce error rates, and improve operational efficiency in demanding and dynamic environments. Advancements in the model training process are also a key priority, with efforts focused on minimizing classification errors and improving generalization across diverse operational scenarios. These improvements involve expanding the training dataset to include a wider variety of waste types and conditions, as well as employing more rigorous validation techniques to ensure robustness and reliability in real-world applications. Emphasizing its scalability, adaptability, and ongoing optimization, the system is presented as a proactive approach to addressing the issues associated with modern waste management. These planned extensions and improvements will ensure its relevance and effectiveness in addressing a wider spectrum of waste types and recycling needs while simultaneously contributing to the advancement of intelligent sorting technologies.

6. Conclusions

This paper outlines the development and successful testing of an AI-driven recycling sorter, marking a significant advancement in automated waste management through the integration of a YOLOv8-based deep learning model with a PLC-controlled mechanical system. The developed prototype demonstrated effective classification of recyclable materials, including plastics, metals, and paper. It achieved a precision rate of 90% and a recall rate of 80%, with a mean Average Precision at the 50% threshold (mAP50) of 86%, ensuring rapid and accurate material segregation. This seamless integration ensures reliable real-time functionality, suitable for broad waste management applications.
In continuous use, it maintained high precision and reliability, showing low training losses (a box loss of 0.6 and a classification loss of 0.5) and effectively handling object overlaps without overfitting. Compared to existing solutions, such as those reviewed in the literature, our system shows excellent performance. For example, compared to the ResNet50 model evaluated by Al-Mashhadani et al. [37], which achieved an accuracy of 95% and an F1 score of 94.8%, our system provides comparable accuracy but with enhanced real-time processing capabilities essential for dynamic waste sorting environments. This represents a clear improvement in
operational efficiency and adaptability, with our system achieving an mAP50 approximately 23% higher than similar AI models discussed in a recent study by Sayem et al. [36]. The integration of advanced AI capabilities not only mitigates the limitations of traditional sorting methods but also enhances the precision and throughput of sorting. Unlike traditional methods that struggle with mixed waste and require extensive human intervention, our AI-enhanced system reduces labor costs, minimizes human error, and improves overall operational efficiency. This reduction in dependence on manual labor and the improvement of material quality are crucial as global waste volumes continue to challenge the urban and industrial sectors. Looking ahead, the system's scalability and adaptability will enable compliance with stringent environmental and regulatory requirements, adjusting efficiently to diverse waste categories and increasingly intricate waste management contexts. This AI-driven platform plays an instrumental role in the promotion of sustainable waste management, minimizing environmental impacts, and serving the principles of the circular economy. It stands out not only for its technical capabilities but also for its significant contributions to resource conservation and environmental preservation. In conclusion, this AI-powered sorting system not only showcases the technical capabilities and significant impacts of AI in waste sorting but also sets new standards for sustainable waste management solutions aligned with evolving regulatory and environmental standards.

Author Contributions: Conceptualization, V.R., M.S., A.N., C.K. and N.R.; methodology, N.A. and N.R.; software, V.R., M.S., A.N. and C.K.; validation, N.A., M.R., H.E. and N.R.; formal analysis, N.A., V.R., M.S., A.N., C.K. and N.R.; investigation, V.R., M.S., A.N. and C.K.; resources, N.R.; data curation, V.R., M.S., A.N., C.K. and N.R.; writing—original draft preparation, N.A., V.R., M.S., A.N., C.K. and N.R.; writing—review and editing, N.A., M.R., H.E. and N.R.; visualization, N.A., V.R., M.S., A.N., C.K., M.R., H.E. and N.R.; supervision, N.A. and N.R.; project administration, N.R.; funding acquisition, N.A. and N.R. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the mechatronics laboratories at the College of Computing, Department of Applied Computing, Michigan Technological University.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data are available upon request from the corresponding authors.

Acknowledgments: This research was conducted in the Advanced Programmable Logic Controllers Laboratory within the Department of Applied Computing at the College of Computing, Michigan Technological University. Previously published projects in the same educational settings include gripper control [38,39], robot arm control and interfacing [40–42], and supervisory control and quality assurance [43,44].

Conflicts of Interest: The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:

AI      Artificial Intelligence
ANN     Artificial Neural Network
CIoU    Complete Intersection over Union
CNN     Convolutional Neural Network
DFL     Distribution Focal Loss
GDP     Gross Domestic Product
HMI     Human–Machine Interface
IoU     Intersection over Union
IP      Internet Protocol
mAP     Mean Average Precision
ML      Machine Learning
MSW     Municipal Solid Waste
PC      Personal Computer
PLC     Programmable Logic Controller
SDG     Sustainable Development Goals
YOLO    You Only Look Once
Appendix A

    from ultralytics import YOLO
    import cv2
    import cvzone
    import math
    import time

    # # Training code.
    # if __name__ == '__main__':
    #     model = YOLO('yolov8n.pt')
    #     # model = YOLO('COCO.yaml')
    #     model.train(data="config.yaml", epochs=50, imgsz=736, batch=36)
    #     # model.train(data="config.yaml", epochs=50, imgsz=1024, batch=16)

    cap = cv2.VideoCapture(0)  # For webcam
    cap.set(3, 1280)  # frame width
    cap.set(4, 720)   # frame height
    # cap = cv2.VideoCapture("../Videos/motorbikes.mp4")  # For video

    model = YOLO("C:/Users/906Au/PycharmProjects/RecycleProg/best.pt")
    classNames = ["Cans", "paper - v1 2023-02-04 11-49pm", "plastic bottle"]

    prev_frame_time = 0  # timestamps for frame-rate measurement
    new_frame_time = 0

    while True:
        new_frame_time = time.time()
        success, img = cap.read()
        results = model(img, stream=True)

Figure A1. Code implementation for importing the essential libraries and loading the trained detection model, including Ultralytics for model integration and cv2 for video capture.

    from pylogix import PLC

    with PLC() as comm:
        comm.IPAddress = '192.0.0.206'
        # ret = comm.Read('paper_out')
        # print(ret.TagName, ret.Value, ret.Status)
        while True:
            # Enumerate every tag exposed by the controller.
            tags = comm.GetTagList()
            for t in tags.Value:
                print("Tag:", t.TagName, t.DataType)
            # Write to and read back program-scoped tags.
            ret = comm.Write("Program:MainProgram.paper_out", 2)
            # print(ret.TagName, ret.Value, ret.Status)
            ret = comm.Read("Program:MainProgram.Test")
            print(ret.TagName, ret.Value, ret.Status)

Figure A2. Code showcasing Pylogix integration for PLC communication, including IP setup and tag operations (retrieve, write, read).
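The two listings above run as separate scripts, and the Figure A1 excerpt ends before the detection results are consumed. The sketch below illustrates one way the per-frame detections could be forwarded to the controller under the pylogix conventions of Figure A2; the class_to_tag mapping and the can_out and plastic_out tag names are hypothetical (only paper_out appears in the original listing).

    from ultralytics import YOLO
    from pylogix import PLC
    import cv2

    # Hypothetical mapping from detected class index to a PLC output tag.
    class_to_tag = {
        0: "Program:MainProgram.can_out",      # Cans
        1: "Program:MainProgram.paper_out",    # paper
        2: "Program:MainProgram.plastic_out",  # plastic bottle
    }

    model = YOLO("best.pt")
    cap = cv2.VideoCapture(0)

    with PLC() as comm:
        comm.IPAddress = "192.0.0.206"  # controller address from Figure A2
        while True:
            success, img = cap.read()
            if not success:
                break
            for result in model(img, stream=True):
                for box in result.boxes:
                    cls = int(box.cls[0])
                    tag = class_to_tag.get(cls)
                    if tag is not None:
                        comm.Write(tag, 1)  # signal the actuator for this class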
pt " ) classNames = [ " Cans " , " paper − v1 2023−02−04 11−49pm" , " p l a s t i c b o t t l e " ] prev_frame_time = 0 new_frame_time = 0 while True : new_frame_time = time . time ( ) success , img = cap . read ( ) r e s u l t s = model (img , stream=True ) Figure A1. Code implementation for importing essential libraries and loading training images, including Ultralytics for model integration and cv2 for video capture. from pylogix import PLC with PLC( ) as comm: comm. IPAddress = ’ 1 9 2 . 0 . 0 . 2 0 6 ’ # ret = comm. Read ( ’ paper_out ’) # print ( ret . TagName, ret . Value , ret . Status ) while ( True ) : tags = comm. GetTagList ( ) for t in tags . Value : print ( "Tag : " , t . TagName, t . DataType ) ret = comm. Write ( " Program : MainProgram . paper_out " , 2) # print ( ret . TagName, ret . Value , ret . Status ) ret = comm. Read ( " Program : MainProgram . Test " ) print ( ret . TagName, ret . Value , ret . Status ) Figure A2. Code showcasing Pylogix integration for PLC communication, including IP setup and tag operations (retrieve, write, read). Figure A2. Code showcasing Pylogix integration for PLC communication, including IP setup and tag operations (retrieve, write, read). References 1. UN Environment Programme & ISWA. Global Waste Management Outlook 2024—Beyond an Age of Waste: Turning Rubbish into a Resource. Technical Report, United Nations Environment Programme. 2024. Available online: https://wedocs.unep.org/ 20.500.11822/44939 (accessed on 9 October 2024). 2. Singh, M. Solid waste management in urban India: imperatives for improvement. J. Contemp. Issues Bus. Gov. 2019, 25, 87–92. 3. MacArthur, E. Towards the circular economy. J. Ind. Ecol. 2013, 2, 23–44. 4. Kirchherr, J.; Reike, D.; Hekkert, M. Conceptualizing the circular economy: An analysis of 114 definitions. Resour. Conserv. Recycl. 2017, 127, 221–232. [CrossRef] 5. Material Economics. The Circular Economy—A Powerful Force for Climate Mitigation; Material Economics Sverige AB: Stockholm, Sweden, 2018.
6. Bozma, H.I.; Yalçın, H. Visual processing and classification of items on a moving conveyor: A selective perception approach. Robot. Comput.-Integr. Manuf. 2002, 18, 125–133. [CrossRef]
7. Kazemi, S.; Kharrati, H. Visual processing and classification of items on moving conveyor with pick and place robot using PLC. Intell. Ind. Syst. 2017, 3, 15–21. [CrossRef]
8. Wilts, H.; Garcia, B.R.; Garlito, R.G.; Gómez, L.S.; Prieto, E.G. Artificial intelligence in the sorting of municipal waste as an enabler of the circular economy. Resources 2021, 10, 28. [CrossRef]
9. Lubongo, C.; Bin Daej, M.A.; Alexandridis, P. Recent developments in technology for sorting plastic for recycling: The emergence of artificial intelligence and the rise of the robots. Recycling 2024, 9, 59. [CrossRef]
10. Cheng, T.; Kojima, D.; Hu, H.; Onoda, H.; Pandyaswargo, A.H. Optimizing waste sorting for sustainability: An AI-powered robotic solution for beverage container recycling. Sustainability 2024, 16, 10155. [CrossRef]
11. Feng, Z.; Yang, J.; Chen, L.; Chen, Z.; Li, L. An intelligent waste-sorting and recycling device based on improved EfficientNet. Int. J. Environ. Res. Public Health 2022, 19, 15987. [CrossRef] [PubMed]
12. Nwokediegwu, Z.Q.S.; Ugwuanyi, E.D.; Dada, M.A.; Majemite, M.T.; Obaigbena, A. AI-driven waste management systems: A comparative review of innovations in the USA and Africa. Eng. Sci. Technol. J. 2024, 5, 507–516. [CrossRef]
13. Strollo, E.; Sansonetti, G.; Mayer, M.C.; Limongelli, C.; Micarelli, A. An AI-based approach to automatic waste sorting. In Proceedings of the HCI International 2020-Posters: 22nd International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020; Proceedings, Part I 22; Springer International Publishing: Cham, Switzerland, 2020; pp. 662–669.
14. Jacobsen, R.M.; Johansen, P.S.; Bysted, L.B.L.; Skov, M.B. Waste wizard: Exploring waste sorting using AI in public spaces. In Proceedings of the 11th Nordic Conference on Human–Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI '20), Tallinn, Estonia, 2020; pp. 1–11.
15. Mohammed, M.A.; Abdulhasan, M.J.; Kumar, N.M.; Abdulkareem, K.H.; Mostafa, S.A.; Maashi, M.S.; Khalid, L.S.; Abdulaali, H.S.; Chopra, S.S. Automated waste-sorting and recycling classification using artificial neural network and features fusion: A digital-enabled circular economy vision for smart cities. Multimed. Tools Appl. 2023, 82, 39617–39632. [CrossRef] [PubMed]
16. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [CrossRef]
17. Shennib, F.; Schmitt, K. Data-driven technologies and artificial intelligence in circular economy and waste management systems: A review. In Proceedings of the 2021 IEEE International Symposium on Technology and Society (ISTAS), Waterloo, ON, Canada, 28–31 October 2021; pp. 1–5.
18. Melinte, D.O.; Travediu, A.M.; Dumitriu, D.N. Deep convolutional neural networks object detector for real-time waste identification. Appl. Sci. 2020, 10, 7301. [CrossRef]
19. Wu, N.; Wang, G.; Jia, D. A hybrid model for household waste sorting (HWS) based on an ensemble of convolutional neural networks. Sustainability 2024, 16, 6500. [CrossRef]
20. Chauhan, R.; Shighra, S.; Madkhali, H.; Nguyen, L.; Prasad, M. Efficient future waste management: A learning-based approach with deep neural networks for smart system (LADS). Appl. Sci. 2023, 13, 4140. [CrossRef]
21. Abuhejleh, A.A.; Alafeshat, M.Z.; Almtireen, N.; Elmoaqet, H.; Ryalat, M.; AlAjlouni, M.M. Recyclable waste categorization with transfer learning. In Proceedings of the 2024 22nd International Conference on Research and Education in Mechatronics (REM), Amman, Jordan, 24–26 September 2024; pp. 343–348.
22. Huang, K.; Lei, H.; Jiao, Z.; Zhong, Z. Recycling waste classification using vision transformer on portable device. Sustainability 2021, 13, 11572. [CrossRef]
23. Bhattacharya, S.; Kumar, A.; Krishav, K.; Panda, S.; Vidhyapathi, C.; Sundar, S.; Karthikeyan, B. Self-adaptive waste management system: Utilizing convolutional neural networks for real-time classification. Eng. Proc. 2024, 62, 5. [CrossRef]
24. Abdu, H.; Noor, M.H.M. A survey on waste detection and classification using deep learning. IEEE Access 2022, 10, 128151–128165. [CrossRef]
25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
26. Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8. Ultralytics Documentation, 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 5 October 2024).
27. Choi, J.; Lim, B.; Yoo, Y. Advancing plastic waste classification and recycling efficiency: Integrating image sensors and deep learning algorithms. Appl. Sci. 2023, 13, 10224. [CrossRef]
28. Wen, S.; Yuan, Y.; Chen, J. A vision detection scheme based on deep learning in a waste plastics sorting system. Appl. Sci. 2023, 13, 4634. [CrossRef]
29. Nandre, C.; Yazbec, E.; Urunkar, P.; Motey, S.; Hazaveh, P.; Rawashdeh, N.A. Robot vision-based waste recycling sorting with PLC as centralized controller. In Proceedings of the 2023 15th International Conference on Computer and Automation Engineering (ICCAE), Sydney, Australia, 3–5 March 2023; pp. 381–384.
30. Kokoulin, A.N.; Uzhakov, A.A.; Tur, A.I. The automated sorting methods modernization of municipal solid waste processing system. In Proceedings of the 2020 International Russian Automation Conference (RusAutoCon), Sochi, Russia, 6–12 September 2020; pp. 1074–1078. [CrossRef]
31. Zulfiqar, R.; Mehdi, B.; Iftikhar, R.; Khan, T.; Zia, R.; Saud, N. PLC based automated object sorting system. In Proceedings of the 2019 4th International Electrical Engineering Conference (IEEC 2019), Singapore, 25–28 November 2019.
32. Koskinopoulou, M.; Raptopoulos, F.; Papadopoulos, G.; Mavrakis, N.; Maniadakis, M. Robotic waste sorting technology: Toward a vision-based categorization system for the industrial robotic separation of recyclable waste. IEEE Robot. Autom. Mag. 2021, 28, 50–60. [CrossRef]
33. Wu, Q.; Wang, N.; Fang, H.; He, D. A novel object detection method to facilitate the recycling of waste small electrical and electronic equipment. J. Mater. Cycles Waste Manag. 2023, 25, 2861–2869. [CrossRef]
34. Vinodha, D.; Sangeetha, J.; Sherin, B.C.; Renukadevi, M. Smart garbage system with garbage separation using object detection. Int. J. Res. Eng. Sci. Manag. 2020, 3, 779–782.
35. Wahyutama, A.B.; Hwang, M. YOLO-based object detection for separate collection of recyclables and capacity monitoring of trash bins. Electronics 2022, 11, 1323. [CrossRef]
36. Sayem, F.R.; Islam, M.S.B.; Naznine, M.; Nashbat, M.; Hasan-Zia, M.; Kunju, A.K.A.; Khandakar, A.; Ashraf, A.; Majid, M.E.; Kashem, S.B.A.; et al. Enhancing waste sorting and recycling efficiency: Robust deep learning-based approach for classification and detection. Neural Comput. Appl. 2024, 1–17. [CrossRef]
37. Al-Mashhadani, I.B. Waste material classification using performance evaluation of deep learning models. J. Intell. Syst. 2023, 32, 20230064. [CrossRef]
38. Kocher, E.; Ochieze, C.G.; Oumar, A.; Winter, T.; Sergeyev, A.; Gauthier, M.; Rawashdeh, N. A smart parallel gripper industrial automation system for measurement of gripped workpiece thickness. In Proceedings of the 2022 Conference for Industry and Education Collaboration (CIEC), Tempe, AZ, USA, 9–11 February 2022.
39. Valaboju, C.; Amuda, M.; NCh, S.C.; Reddy, S.; Hazaveh, P.K.; Rawashdeh, N.A. A supervisory control system for a mechatronics assembly station. In Proceedings of the 15th International Conference on Computer and Automation Engineering (ICCAE 2023), Sydney, Australia, 3–5 March 2023; pp. 503–507.
40. Liu, Z.; Johnston, C.; Leino, A.; Winter, T.; Sergeyev, A.; Gauthier, M.; Rawashdeh, N. An industrial pneumatic and servo four-axis robotic gripper system: Description and Unitronics ladder logic programming. In Proceedings of the 2022 American Society for Engineering Education (ASEE) Conference for Industry and Education Collaboration (CIEC), Tempe, AZ, USA, 4–11 February 2022; pp. 9–11.
41. Reyes, A.; Reinhardt, S.; Wise, T.; Rawashdeh, N.; Paheding, S. Gesture controlled collaborative robot arm and lab kit. In Proceedings of the 2022 American Society for Engineering Education (ASEE) Conference for Industry and Education Collaboration (CIEC), Tempe, AZ, USA, 4–11 February 2022.
42. Piechocki, B.; Spitzner, C.; Karanam, N.; Winter, T.; Sergeyev, A.; Gauthier, M.; Rawashdeh, N. Operation of a controllable force-sensing industrial pneumatic parallel gripper system. In Proceedings of the 2022 American Society for Engineering Education (ASEE) Conference for Industry and Education Collaboration (CIEC), Tempe, AZ, USA, 4–11 February 2022; pp. 9–11.
43. Salameen, L.; Estatiah, A.; Darbisi, S.; Tutunji, T.A.; Rawashdeh, N.A. Interfacing computing platforms for dynamic control and identification of an industrial KUKA robot arm. In Proceedings of the 2020 21st International Conference on Research and Education in Mechatronics (REM), Cracow, Poland, 9–11 December 2020; pp. 1–5. [CrossRef]
44. Alghamdi, B.; Lee, D.; Schaeffer, P.; Stuart, J. An integrated robotic system: 2D-vision based inspection robot with automated PLC conveyor system. In Proceedings of the 4th International Conference of Control, Dynamic Systems, and Robotics (CDSR'17), Toronto, ON, Canada, 21–23 August 2017; pp. 21–23.

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.