International Journal of Advanced Engineering, Management
and Science (IJAEMS)
Peer-Reviewed Journal
ISSN: 2454-1311 | Vol-9, Issue-9; Sep, 2023
Journal Home Page: https://ijaems.com/
Article DOI: https://dx.doi.org/10.22161/ijaems.99.1
This article can be downloaded from here: www.ijaems.com
©2023 The Author(s). Published by Infogain Publication. This work is licensed under a Creative Commons Attribution 4.0 License.
http://creativecommons.org/licenses/by/4.0/
Data Centre Network Optimization
Tajammul Hussain
itajammuleng@gmail.com
Received: 11 Jul 2023; Received in revised form: 15 Aug 2023; Accepted: 25 Aug 2023; Available online: 03 Sep 2023
Abstract— Technology has advanced rapidly in recent years, and IT and telecom infrastructure is constantly growing. Due to the increased importance of and emphasis on cloud computing, many growing and emerging enterprises have shifted their computing needs to the cloud, which increases inter-server data traffic and the bandwidth required of the Data Centre Communication Network (DCCN). The multi-tier hierarchical architecture used in modern data centres is based on traditional Ethernet/fabric switches. The researcher's goal was to improve the data centre communication network design so that the majority of its problems can be solved while still using the existing network infrastructure and with lower capital expenditure. This is achieved through the deployment of OpenFlow (OF) switches and Microelectromechanical Systems (MEMS) based all-optical switching (MAOS). This will be beneficial in decreasing network latency, power consumption, and CAPEX, in addition to helping with scalability concerns, traffic management, and congestion control. Additionally, by implementing new virtualization techniques, DCCN resource utilization and cabling problems can be improved. To address these data centre challenges, the researcher proposes an entirely new flat data centre communication network architecture named "Hybrid Flow based Packet Filtering, Forwarding and MEMS based All Optical Switching" (HFPFMAOS), which is ultimately controlled by a Software Defined Network (SDN) controller.
Keywords— Data Centre, Network Optimization, Software defined Networking, OpenFlow
I. INTRODUCTION
1.1 Background
A data centre is a specialized building that houses various IT
resources, such as servers, data storage systems, and
networking and communication devices. Data centres have
become increasingly important in IT and data networks over
the last few years as a result of the explosive expansion of
information technology and internet consumption.
Additionally, as a result of the emergence of cloud
computing, server virtualization, social media, and mobile
data, the number of servers deployed, storage space, and
interconnected network equipment in data centres is
growing tremendously. The primary elements of a data centre IT network include:
• Data Centre Communication Network
• High Performance Computing Network
• Storage Area Network
Data centre traffic can be categorized into three types:
• Traffic flows which are within the data centre.
• Traffic flows which travel between data centres.
• Traffic flows between data centre and the end user.
Data centre IP traffic is increasing at a fast rate [1] as
depicted in the below figure. The primary driver of this
expansion is cloud computing, which by 2023 is predicted
to account for 4/5 of all traffic in data centres [2].
Fig 1.1: Global Data Centre Traffic, 2016–2021
Global data centres traffic by destination is shown in the
figure below.
Fig 1.2: Global Data Centres Traffic by Destination
It is evident from the above chart that the majority of traffic occurs inside data centres, with most of it travelling between servers and edge switches. The figure below
depicts the traffic forecast between the various tiers of the
data centre communication network.
Fig 1.3 Traffic Forecast at Different Layers
1.2 Problem Statement
Limited server capacity and high oversubscription rates,
network congestion and latency issues, management of
internal data-centre traffic, support for a variety of traffic
patterns, scalability, agility, effective resource utilization,
management, fault handling, and troubleshooting are
among the problems that DCCN must deal with. When
building a data centre, these issues must be taken into
consideration. The problems or difficulties that the DCCN
will encounter are as follows:
• Limited capacity between servers and a high
oversubscription rate as traffic moves through the
layers of switches in a hierarchical fashion.
• Problems with congestion and high network
latency have a negative impact on delay-sensitive
applications and affect network performance as a
whole.
• Controlling the nearly 75% of the overall traffic
that is generated inside the data centre (i.e., east-
west traffic between racks).
• Ease of Management, fault handling and
troubleshooting in the network.
• Support for a range of traffic patterns, such as long-lasting, high-bandwidth "elephant" flows and short-lived "mouse" flows.
• As a network grows, difficulties with scalability,
agility, and efficient resource use arise.
1.3 Research Contribution
The research contribution relates to developing a low-cost, capital-efficient solution
that will improve the scalability, traffic management,
congestion control, and resource usage of data centre
design. Academic contribution relates to proposing a
solution HFPFMAOS using OpenFlow tools for:
• Congestion Reduction
• Improvement in Network Latency
• Implementing Scalability
• Traffic Management
• Reducing power Consumption
• CAPEX Reduction
Practical contribution of this research relates to coming up
with an innovative data centre topology architecture with
improved performance.
1.4 Objectives
• Application of hybrid flow-based packet filtering/forwarding and MEMS-based all-optical switching, utilizing different virtualization techniques.
• Efficient utilization of data centre network
resources.
o Reducing Congestion
o Improving Network Latency
o Implementing Scalability
o Traffic Management
• Reducing Power Consumption and saving
CAPEX.
• Provisioning of centralized intelligence of whole
network.
1.5 Research Questions
• How to come up with a solution based on
HFPFMAOS and MEMS based all optical
switching?
• How to achieve efficient utilization of data centre
network resources?
• How to reduce congestion and improve network latency, scalability, and traffic management?
• How to reduce power consumption and save
CAPEX?
• How to implement centralized intelligence of
whole network?
• How to implement network virtualization
techniques?
1.6 Research Methodology
OpenFlow (OF) switches were used in the access layer as TOR switches. OF messages were captured using Wireshark. Traffic was generated between various hosts (i.e. server machines) using the iperf testing tool. The performance of the network and the delays were calculated and analyzed. For clarity, a reference data centre topology was used. The MEMS switch was aggregated with a conventional switch and connected to the OpenFlow (OF) switches installed at the access layer. Network traffic was generated between various hosts and flow delays were observed. The researcher then directed flows via the MEMS switch and simultaneously monitored the performance of the network.
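To make this procedure concrete, a minimal sketch of how such an experiment can be scripted with the Mininet Python API (one of the tools listed in Section 1.7) is given below. The controller address, switch/host names, IP addresses, and link speed are illustrative assumptions, an ODL controller is assumed to be listening locally, and this is not the exact reference topology used in the study.

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.log import setLogLevel

setLogLevel('info')

# Two OF-enabled TOR switches, two hosts, remote (ODL) controller; values illustrative
net = Mininet(controller=None, switch=OVSSwitch, link=TCLink)
net.addController('c0', controller=RemoteController, ip='127.0.0.1', port=6633)
s1 = net.addSwitch('s1', protocols='OpenFlow13')
s2 = net.addSwitch('s2', protocols='OpenFlow13')
h1 = net.addHost('h1', ip='10.0.0.1/24')
h2 = net.addHost('h2', ip='10.0.0.2/24')
net.addLink(h1, s1)
net.addLink(h2, s2)
net.addLink(s1, s2, bw=100)          # inter-switch link, 100 Mbps (illustrative)
net.start()

# Generate traffic between hosts with iperf and measure per-pair delay with ping
print(net.iperf((h1, h2)))           # [server bandwidth, client bandwidth]
print(net.pingFull([h1, h2]))        # RTT min/avg/max/mdev per host pair
net.stop()
```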
1.7 Software
The researcher used the following software for carrying out the requisite analysis:
a) MININET
b) Putty
c) Wireshark
d) Oracle VM Virtual Box
e) Miniedit
f) ODL Controller
g) Xming
II. DATA CENTRE (DC)
2.1 Introduction
A data centre is a purpose-built facility designed to house, supervise, and provision IT resources [3]. A data centre includes specialized building infrastructure, backup plans for power shortages or unavailability, cooling systems, special-purpose equipment, servers, mainframes, and High Performance Computing (HPC) networks to handle a variety of communication data, as well as well-structured, sophisticated cabling, various software applications, monitoring centres, and well-equipped physical security systems.
Fig. 2.1 Basic Data Centre Diagram
2.2 Evolution
First Data centre phase (called Phase 1.0) began in the
1950s and consisted of computer rooms with mainframe
systems' CPUs and peripheral devices including storage,
terminals, and printers, among other things. These monolithic, software-based centralized systems gave users little control over the IT infrastructure and made heavy use of the available resources. Phase 2.0 of data centres began in the 1980s, when the client-server application model gained prominence. Servers, which are fairly compact and accessed through applications installed on client PCs, replaced mainframe computers.
In this phase, servers that carry out specific tasks are
typically installed close to clients rather than the main IT
infrastructure to avoid paying excessive bandwidth costs.
When the internet began to take off in the 1990s and the use
of web-based applications increased, this mandated that
servers be placed centrally in properly constructed data
centres. Thus, the data centre that arises from the mainframe
computer room has gained significance in the 1990s. Fig.
2.2 [3] depicts the timeline of data centre evolution.
Fig. 2.2 Evolution Phases
Phase 3.0 began around the year 2000; because technology evolved so quickly at this time, data centre space, electricity, and the IT network began to reach capacity. As a result, it is expensive to expand existing
facilities or to build new ones. According to a 2005 Cisco
IT research, around this time DC networks and servers were
typically only used at 20% of their capacity. Application
silos with discrete sets of servers, networks, and storage
resources were the primary contributors to this problem.
Later, a number of network consolidation projects were completed that provided new features, enhanced resource utilization, decreased the number of network components and processes, and improved operational effectiveness.
2.3 Types
There are two types [4][5]:
• Private
• Cloud Based
2.3.1 Private
It is the on-site hardware that offers computing services and
stores data within a local network that is managed by an
organization's IT department. These are owned and run by small private/public businesses, governmental organizations, etc.
2.3.2 Cloud Based
Another name used for cloud data centres is co-
location/managed services provider. Co-located Data
Centers were created and maintained to offer specialized
infrastructure and provide various services to external
sources and parties. Major elements that force businesses to
either go for cloud computing or develop their own data
centers are:
• Business market requirements
• Data privacy issues
• The increasing cost of related
equipment/infrastructure
2.4 Data Centre Communication Network (DCCN)
Data services and data transit from servers to clients or other servers are the major goals of the DCCN. The following qualities are now required of data centre communication networks from the perspectives of dependability, expansion, and efficiency:
• Availability
• Scalability
• Flexibility
• Efficiency
• Predictability
Ethernet is the most ubiquitous and widely used DCCN protocol for data links, with interface rates ranging from
10Mbps to 100Gbps. However, as the volume of traffic
through the DCCN increased, various constraints compelled
the development and adoption of new virtualization
technologies as well as the optimization of the data centre
network through the use of cutting-edge networking
strategies like MEMS all optical switching and
conventional packet switching.
2.5 DC Tiers
Various Tiers of Data Centres are shown in the figure below
from Tier-1 to Tier-4.
Fig. 2.3 DC Tiers
2.6 Design Factors for Data Centre Network
The following factors must be taken into account when planning and deploying DCCN architecture [3].
• Failure Impact & Application Resilience: All local and distant users will be unable to access their apps if the DCCN fails.
• Connectivity between the server and host: Servers must be linked together using several redundant Ethernet connections.
• Traffic Direction: In DCCN, the majority of traffic
is between servers inside the data centre.
• Agility: This term refers to the capacity for
assigning any service or application to any server
at any moment in the data centre network while
maintaining sufficient performance isolation and
security amongst various applications.
• Growth Rate: Increasing number of servers,
switches, and switch ports, etc., as customers and
their data traffic increases.
• Application Bandwidth Demand &
Oversubscription in case demand increases.
2.7 DCCN Challenges
Challenges for the Data Centers Communication Network
fields are as follows:
• Due to the fact that router links carry traffic going in
and out of the data centre, the bandwidth available
over these links for server communication across
various data centre areas is constrained [8].
• Network congestion and latency issues. In a data centre, many applications are active, producing distinct packet flows that move both within and across the DCCN's multi-tier hierarchical structure with its numerous hops, and at times packet aggregation creates a bandwidth constraint. When the receiver's buffer space for absorbing packet
flows runs short because the incoming rate from numerous transmitters surpasses the rate at which it can handle packet flows, congestion results. This congestion lengthens the delay and causes the receiver to begin dropping packets, which affects various applications, particularly latency-sensitive ones, and lowers the network's overall performance.
• Managing the read/write, backup, and replication
traffic that moves within the data centre due to the
separation of application servers, databases, and
storage, which accounts for over 75% of all traffic
(i.e., east-west traffic between racks). Additionally,
because jobs are distributed across numerous servers
and assigned in parallel processing, DCCN's internal
traffic is increased.
• Support for a range of traffic patterns, such as long-lasting, high-bandwidth "elephant" flows and short-lived "mouse" flows. Examples of very large data producers include particle accelerators, planes, trains, metros, undergrounds, self-driving cars, patient data in healthcare centres, etc. As of 2014, a Boeing 787 produced 40 terabytes of onboard data every hour, of which 0.5 terabytes per hour was transmitted to a data centre. The
majority of flows within a data centre are typically
mouse flows, while only a few elephant flows
contain the majority of the data. Managing every sort
of traffic at once while keeping the overall network
delay within bounds is therefore a major issue.
• As a network grows, difficulties with scalability,
agility, and efficient resource use arise. In addition to
making fault handling and debugging more
challenging as the number of communication devices
increases, this will also result in an increase in
management overhead bytes on the network.
• As additional devices are added, energy
requirements and consumption will increase.
Currently, in traditional data centers, the top of the
rack (TOR) switch, also known as the fabric
interconnect, is where all of the servers in a rack are
connected. Each rack has a number of servers, either
rack mount servers or chassis-based servers. On one
side, this aggregation layer provides communication
between data centre clusters, allowing packet-based
East-West traffic to pass through, while on the other,
it also provides connectivity with the core network,
allowing North-South traffic to pass. As a result, the
aggregation layer is the layer where problems arise
because 75% of the data center’s east-west traffic
passes through it, causing a bandwidth bottleneck &
an increase in latency.
2.8 DCCN Topologies
The architecture of a data centre communication network
(DCCN), which serves as the foundation for its
applications, must be scalable, reliable, latency-free, and
have adequate capacity to prevent network congestion.
These qualities heavily rely on the network architecture in
which the DCCN is installed and are crucial to the overall
effectiveness of the data centre. Figure below depicts the
Common DCCN Architecture, as stated by Cisco [9].
Fig 2.4 Common DCCN Architecture
Layered architecture has the advantage of enhancing the
flexibility, modularity, and resilience of networks. Every
layer in this design performs a separate role for unique
profiles, such as the routers in the core layer (i.e., the
aggregation router and the border router), which use routing
protocols to decide how to forward traffic for both ingress
and egress. Core switches, which offer incredibly flexible,
scalable connections with numerous aggregation layer
switches, make up the Layer 2 domain. This layer serves as
a default gateway and a focal point for server IP subnets.
Between numerous aggregation layer switches, it is
typically used to transfer traffic between servers, and
stateful network devices such as server load balancers and
firewalls are attached to this layer. The switches that make
up the access layer are typically used by servers to connect
to and communicate with one another. It contributes to
better network administration and is in charge of
exchanging any kind of traffic between servers, including
broadcast, multicast, and unicast.
The bottom of the above illustration shows a server layer
stacked in server racks.
Numerous virtual machines run on these servers and are assigned to various data centre applications. To
manage and respond to requests from external clients
arriving through the internet, an application is typically
linked to several public IP addresses. These requests are
split up among the pool of servers for processing by
specialized equipment called a load balancer that is
redundantly connected to the core switches [10].
2.8.1 Fat-Tree
Al-Fares and colleagues [11] have proposed this DCCN
architecture. It makes use of the fat-tree idea and aims to
boost fault tolerance, end-to-end cross sectional bandwidth,
scalability, and cost-effectiveness. With this topology, the
core and several pods make up the entire infrastructure.
Servers, TORs (access switches), and aggregation switches make up the pods.
Fig 2.5 Fat-Tree
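To make the pod and core structure concrete, a minimal Mininet sketch of a k-ary fat-tree is given below (Python, matching the tooling in Section 1.7). The naming and wiring are illustrative and do not reproduce the customized IP addressing or multipath routing of [11]; a controller capable of handling the loops in this topology is assumed.

```python
from mininet.topo import Topo

class FatTreeTopo(Topo):
    """Minimal k-ary fat-tree sketch: (k/2)^2 core switches, k pods with
    k/2 aggregation and k/2 edge (TOR) switches each, and k/2 hosts per
    edge switch. Names are illustrative."""

    def build(self, k=4):
        half = k // 2
        cores = [self.addSwitch(f'c{i}') for i in range(half * half)]
        for pod in range(k):
            aggs = [self.addSwitch(f'a{pod}{i}') for i in range(half)]
            edges = [self.addSwitch(f'e{pod}{i}') for i in range(half)]
            for i, agg in enumerate(aggs):
                # Each aggregation switch connects to its group of k/2 core switches
                for j in range(half):
                    self.addLink(agg, cores[i * half + j])
                # ... and to every edge (TOR) switch inside its pod
                for edge in edges:
                    self.addLink(agg, edge)
            for i, edge in enumerate(edges):
                for h in range(half):
                    host = self.addHost(f'h{pod}{i}{h}')
                    self.addLink(edge, host)

# Usage (requires a controller that handles the loops, e.g. a remote SDN controller):
#   from mininet.net import Mininet
#   net = Mininet(topo=FatTreeTopo(k=4)); net.start()
```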
The topology adapts the data lines in the DCCN architecture
to give a customized IP address scheme and multipath
routing algorithm. Below figure compares 3 tier and fat-tree
systems.
Fig 2.6 Comparison of 3 Tier/Fat-Tree Topologies
2.8.2 B-Cube
The structure [12] is suggested for the modular DCs created
within containers and offers quick installation and
migration, but scalability is limited because container data
centres are not meant to be scaled up. Layers of COTS
(commodity off the shelf) switches and servers make up this
architecture, and they are in charge of packet forwarding.
At level 0, the BCube architecture is made up of many
BCube0 modules, each of which has a switch with n ports
connected to n servers. n switches make up the BCube1
module at level 1, which is connected to n BCube0 modules
or networks. One server from the BCube0 module is linked
to each switch of BCube1.
Fig 2.7 B-Cube Topology
2.8.3 DCell
Guo et al proposed a recursively specified design. Mini
switches and servers are used in this very marketable design
to forward packets. According to below figure, Dcell1 is
made up of n+1 DCell0 modules, each of which is
connected to the others by a single link via its servers.
Fig 2.8 DCell Topology Architecture
2.8.4 VL2
A Fat-Tree based DCCN design called Virtual Layer 2
(VL2) aims to provide a flat automated IP addressing
scheme that makes it possible to install servers anywhere in
the network without manually configuring their IP
addresses. This makes it possible for any service to be
allocated to any network server. In order to scale to a large
server pool, it makes use of [14] address resolution based
on end systems.
Fig 2.9 Network Architecture VL2
In addition to these architectures, there is a need for an architecture that can handle the present issues the DCCN is encountering while also being backward compatible with the existing infrastructure and its architecture. In this thesis, the HFPFMAOS data centre design is suggested, which can
partially address the shortcomings of the existing
architecture while simultaneously being backwards
compatible with it.
2.8.5 HFPFMAOS
The data centre communication network (DCCN) architecture called HFPFMAOS is suggested as a solution to the various problems that present-day data centres face, while retaining the existing infrastructure. In particular, in this architecture, standard switches are used at the aggregation layer. By monitoring all of the flows across the OF-equipped switches, this gives us consolidated network intelligence and enables us to dynamically establish high-bandwidth data paths between the data centre servers whenever and wherever they are needed. Additionally, it lowers network management overhead and aids quick fault detection and correction. The conceptualized design is shown below.
Fig 2.10 Architecture – HFPFMAOS - DCCN’s
III. MEMS OPTICAL SWITCHING
3.1 Introduction
Interconnection between data centre networks demands a significant capacity increase due to the exponential daily growth of traffic. Therefore, the best way to relieve network congestion caused by electronic switching is to use optical switching.
switching. Although optical fibre has a very high
bandwidth, it is constrained by electronic switching and
transmission capacity. As switching is done in the optical
domain, all optical switching can therefore play an
important role.
3.2 OOO Switching (All Optical)
By modifying optical cross-connections, a circuit is created from the ingress node to the egress node in all-optical switching, and data travels via this circuit entirely in optical form. This helps address the electronic switched-network bottleneck, since data is switched directly in the optical domain rather than going through many optical-electrical-optical conversions.
3.3 OOO Switching Benefits
• POD-based architecture is typically used in data
centres, which results in low use of computing
resources. However, optical switching enables
sharing of computing resources among several
PODs for optimal effectiveness.
• Increases revenue by quickly deploying new
services.
• Less power is lost compared to a traditional
electrical switch.
• On-demand capacity creation and reallocation.
• A remarkable increase in the data centre's
operational efficiency as a result of the smooth
functioning of applications and the efficient use of
computational resources and 3D optical MEMS
switching technology.
• Improved fault detection and quality of service.
3.4 Technology using OOO Switching
Different optical switching methods [17] are used by many
switches. The same are shown in the below table:
Table 3.1 Contrast of various Optical switching
technologies
3.5 Micro Electro Mechanical Systems (MEMS)
MEMS (Micro Electro Mechanical Systems) is the branch of engineering that merges computer technology with minuscule mechanical components built on semiconductor chips, including actuators, gears, sensors, valves, and mirrors [31]. It contains mechanical components, such as micro-mirrors or sensors, which can be easily adjusted as needed and reflect optical signals from input to output fibres, placed within a tiny silicon chip with embedded micro-circuitry. A light beam can be reflected from input to output by tilting/rotating a mirror at varied angles, thus directing the traffic to the desired ports.
3.6 Design and Principle
Below figure shows the Structure of 3D MEMS optical
switch [16].
Fig. 3.1 Optical Switch
Fig 3.2 Standard Format OS
Fig 3.3 A toroidal concave mirror is used in a 3D MEMS
optical switch.
The tilt angle of the MEMS mirror determines the incident angle at which the optical beam from the input collimator array, after being reflected by the input MEMS mirror array, meets the concave mirror. The toroidal concave mirror, which performs an optical Fourier transform, then re-converges the light beam with a shift in position Δl. This shift can be expressed quantitatively using the concave mirror focal length f and the MEMS mirror tilt angle δ as:
Δl = f × 2δ
512 ports can be obtained by using a 2×2 arrangement of collimator arrays with 128 optical ports each and MEMS mirror arrays with 128 ports each. This makes it easier to fabricate high-capacity
optical switches with a negligibly small cumulative pitch
error.
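A quick numerical illustration of this relation follows; the focal length and tilt angle below are illustrative values, not the parameters of the switch in [16].

```python
import math

f = 0.05                      # assumed concave-mirror focal length, metres (illustrative)
delta = math.radians(1.2)     # assumed MEMS mirror tilt angle, radians (illustrative)

shift = f * 2 * delta         # beam shift: delta_l = f * 2 * delta
print(f"beam shift: {shift * 1e3:.2f} mm")

# 2 x 2 arrangement of 128-port collimator/mirror arrays -> 512-port switch
print("total ports:", 2 * 2 * 128)
```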
Fig 3.4 Schematic of 3D 512 MEMS optical ports switch
3.7 MEMS Mirror Structure
The same is shown in the below figure.
Fig 3.5 MEMS tilt mirror cross-sectional schematic
In order to create an air gap between the electrodes and the mirrors, the substrates are produced separately and then joined together by flip-chip bonding. An electrostatic force is produced when a voltage is applied between the electrodes and the mirrors, and this force moves the mirrors. Therefore, the tilt angle of these mirrors can be adjusted by supplying a driving voltage to each electrode.
3.7.1 Mirror Substrate
They are formed of a single silicon crystal, giving the mirror
incredibly steady movement. The MEMS mirror is supported on the X-axis by two folded torsion springs, and the gimbal ring
is connected to the base on the Y axis by a second pair of
folded torsion springs [16]. The mirror cannot be dragged
down and come into contact with the electrodes because of
the great rigidity of these springs in the Z direction relative
to torsion direction. As a result, moving the mirror on the X
and Y axes makes it simple to reflect an optical beam in 3D
space in a particular direction.
Fig 3.6 High aspect torsion spring and MEMS mirror are
snapped together via SEM
3.7.1.1 Mirror Electrode Substrate Fabrication Method
A layer of polyimide is spun over the created mirror pattern [16] (a), (b). Dry etching is then used to form the resist mask (c) and the pattern for the mirror opening (d) on the opposite side of the bulk Si, with the polyimide coating serving as an etch stopper. Following the dicing procedure, the polyimide layer is removed using oxygen plasma (g). This dry process avoids in-process sticking of the mirror during fabrication. Both sides of the mirror carry an Au coating, which gives optical beams high reflectivity. The peak-to-valley difference is used to assess the flatness of the mirror surface, which in our case is 0.05 μm, since the optical characteristics of a switch built from a MEMS mirror array depend on it.
Fig 3.7 Mirror substrate fabrication process flow diagram
3.7.2 Driving of Electrode Substrate
Mirrors move due to this electrostatic force.
3.7.2.1 Fabrication of Driving Electrode Substrate
Fig 3.8 Electrodes’ SEM snap
Fig 3.9 Driving Electrode Substrate’s Fabrication flow
process
3.8 MEMS Mirror Movement
Figure below shows connection between the voltage applied
and tilt angle of mirror.
Fig 3.10 Relationship between Voltage applied & Tilt Angle of Mirror
The size, flatness, fill factor, and scan angle of the micro mirrors all have an impact on how many ports the 3D MEMS switch can support.
3.9 Key Advantages of MEMS Switches in DCCN
• They offer any-to-any connectivity between servers, allowing the transfer of enormous amounts of data at very low latency, much lower than that of IP switches.
• Can handle any data rate.
• They can circumvent the packet-based aggregation
network and offer direct, high-capacity pure
optical data links between any TOR switches to
reduce network latency and shift sensitive traffic
between servers as needed.
IV. SOFTWARE DEFINED NETWORKING
(SDN)
4.1 Introduction
Software Defined Networking (SDN) is an architectural approach to creating and maintaining networks. The major aim of this kind of architecture is to make the control plane programmable by separating the devices' control plane from their data/forwarding plane.
4.2 SDN Architecture
Control and Data planes at present coexist on the same
network device, or, to put it another way, all data flow
functions, such as switching, forwarding, and routing, as
well as the various protocols used to make these decisions,
are housed on that one device. "Provide open user-
controlled administration of the forwarding hardware of a
network element," according to the definition of SDN in [7],
is the fundamental objective. The following elements of an architecture based on software defined networking (SDN) are shown in the below figure:
Fig 4.1: Architecture for Software Defined Networking
(SDN).
4.2.1 Application Layer
Applications can include network characteristics like
forwarding schemes, manageability, and security policies,
among others. With the aid of the controller, the application
layer can abstract the overall view of all network
components and use that data to provide the control layer
the proper direction. In our approach, management
programs that offer CLI/GUI for the network devices are
operating along with network monitoring tools.
4.2.2 Control Plane
It is the network's "brain" and is in charge of managing and programming the forwarding plane. The control plane employs
a topology database to give an abstract picture of all the
network components.
4.2.3 Data Plane
This is the infrastructure layer, the lowest layer in the SDN network architecture, and it includes forwarding network elements (routers, switches). Data transmission, statistics collection, and information monitoring are among its primary functions. In our architecture, MEMS switches are present at the aggregation layer while OF switches are located at the access layer.
4.2.4 Northbound APIs
These are software interfaces between the controller's software modules and the SDN applications. The interface between the application and control planes is informally called the northbound interface.
4.2.5 Southbound APIs
Southbound APIs are standardized application program interfaces (APIs) that allow SDN controllers to communicate with switches, routers, and other forwarding network hardware [18].
4.2.6 East-West Protocols
In a multi-controller-based architecture, these protocols are
used to control how different controllers communicate with
one another. In a broader sense, SDN presents a networking
architecture in which choices regarding the routing of data
traffic are made external to the actual switching hardware.
Therefore, genuine network intelligence can be achieved in
data centres when SDN, PFFS, and MOOOS are applied.
Second, a secure standard protocol should exist to enable
communication between SDN controllers and network
devices. This logical architecture may be implemented
differently by different vendor equipment, but from the
standpoint of an SDN controller, it performs like a
consistent logical switch. AS It is explained that OpenFlow
(OF) is the first and most extensively utilized interface
between the infrastructure layer (data plane) and the control
plane because it satisfies both of these needs.
4.3 Open Flow (OF)
The Open Networking Foundation (ONF) has standardized
OpenFlow as the most popular southbound interface of
SDN architecture. OF's first version, version 1.0, was
created by Stanford University and then acquired by ONF.
Version 1.5 of OF was released in December 2014.
It is an open standard communication protocol that defines
how one or more control servers interact with switches that
are SDN compliant. The flow table entries are installed in
the OF compliance switches by an OF controller so that
traffic is sent in line with these flow entries. These flow
tables can be used by network administrators to alter
network configuration and data traffic patterns.
Additionally, this protocol offers management tools for
packet filtering and topology change control. The OF
protocol is supported by nearly all of the well-known
vendors, including Cisco, HP, IBM, Brocade, and others.
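As a hedged illustration of how such flow entries drive forwarding, the snippet below installs and inspects entries on an Open vSwitch-based OF switch, reusing the switch object s1 from the Mininet sketch in Section 1.6; the match fields, ports, and priorities are illustrative, and standard ovs-ofctl flow syntax is assumed.

```python
# Assumes s1 is an OVSSwitch from an active Mininet session (see the Section 1.6 sketch)

# Proactive entry: TCP traffic arriving on port 1 for 10.0.0.2 is sent out port 2
s1.cmd('ovs-ofctl -O OpenFlow13 add-flow s1 '
       '"priority=100,tcp,in_port=1,nw_dst=10.0.0.2,actions=output:2"')

# Low-priority table-miss entry: anything unmatched is punted to the controller
s1.cmd('ovs-ofctl -O OpenFlow13 add-flow s1 "priority=0,actions=CONTROLLER:65535"')

# Dump the flow table to verify the entries and read their packet/byte counters
print(s1.cmd('ovs-ofctl -O OpenFlow13 dump-flows s1'))
```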
There are two types of switches for OF or SDN compliance.
4.3.1 Open Flow-Only Switches
These switches, which solely support OF operation and
process all packets using the OF pipeline, are also known as
Pure SDN switches or SDN-only switches.
Fig 4.2 Architecture of Open Flow-Only Switch
4.3.2 Hybrid Switches with Open Flow
The same are shown in the below figure.
Fig 4.3 Architecture of Hybrid Switch
4.3.3 Basic Architecture of Open Flow (OF)
The below figure shows the basic OpenFlow architecture.
Fig 4.4 Architecture of Basic Open Flow (OF)
Open Flow switches are deployed in the Access layer as
TORs, which are connected to servers or end hosts on one
end and traditional switches and MEMS switches at the
Aggregation layer on the other. As the control plane, the ODL controller is set up to communicate with the OF switches using the OF protocol via a secure channel using SSL or TLS (Transport Layer Security). Three tables make up the OF switch's logical design: the Flow Table, the Group Table, and the Meter Table.
4.3.4 Flow Table
It is the fundamental component of logical switch design
that determines the action to be taken on each incoming
packet by comparing it to a specific flow table made up of
numerous flow entries. This packet might go via one or
more flow tables that operate as pipelines. As demonstrated
in fig. 4.5, a controller can add, remove, and alter flow
entries from an OF switch's flow tables either proactively or
reactively (in reaction to an incoming packet).
Fig 4.5 Flow Table Entries Model
In the reactive model, flow entries are not generated until an event occurs, i.e. after receipt of the respective packet, and the subsequent action is taken according to the instructions from the controller; in the proactive model, flow entries are generated in advance as required and processing completes without consulting the controller. When a packet first reaches a switch, the switch's OF agent software looks it up in the hardware flow table (ASIC, for hardware switches) or the software flow table (for virtual switches). This information exchange is shown in the figure below.
Fig 4.6 Pipeline Processing
Figure below depicts the packet processing flow chart.
Fig 4.7 Open Flow(OF)-Packet Processing Flow Chart.
Table 4.1 Main components of Flow Entry
4.3.4.1 Match Fields Values
Incoming packets are compared against the match field values shown in the table below; packets whose fields do not match are not selected by the entry.
Table 4.2 Match Field of Flow Entry
4.3.4.2 Priority Allocation
Setting a priority along with the match fields gives a flow entry its unique identity.
4.3.4.3 Counter Checks
These hold statistics such as the number of bytes actually received and the number of missed packets.
4.3.4.4 Instructions
These specify modifications to pipeline processing or the desired set of actions.
4.3.4.5 Timeouts
This defines the maximum time a flow entry may remain idle in the switch before it expires.
4.3.4.6 Cookies
Any opaque data values selected by a relevant controller.
The OF controller can utilize it to filter requests for flow
deletion and modification as well as flow statistics entries.
When processing packets, these values are not used.
4.3.5 Group Table
In order to perform various operations that may have an
impact on several flows, Flow Table may send its flows to
Group Table. A group table is made up of entries for groups
that have group identifiers. The principal elements of a group entry are shown below:
Table 4.3 Group Table
4.3.5.1 Identifier
32-bit unique identifier identifies the group entry.
4.3.5.2 Type of Group
It is used to manage/maintain group types depicting them
as “Required” and “Optional”.
4.3.5.3 Packet Counters
These provide the actual count of packets processed by the group.
4.3.5.4 Action buckets
An action bucket consists of a set of actions that are performed in a specific order to modify packets and then send them to a port. This collection of actions is always executed as a set. A group entry may have zero or more buckets, with the exception of groups specified as "Required: Indirect," which include exactly one bucket. If a group has no buckets, it drops the packet.
Fig.4.8 OpenFlow packets pipeline Processing
4.3.6 Meter Table
The meter table makes it possible to perform various rate-related actions on flows. It is made up of meter entries that specify per-flow meters. Differentiated Services Code Point (DSCP) based metering also enables the division of packets into different classes according to data rate. Meters are attached directly to flow entries rather than to ports. After measuring, they can influence the overall rate of the packet flows and be configured to perform a specific action. The key parts of a meter entry are presented below:
Table 4.4 Main components of Meter Entry
4.3.6.1 Identifier of Meter
Each meter entry is uniquely identified by a 32-bit identifier.
4.3.6.2 Meter Bands
An unordered list of meter bands, each of which specifies a rate and the way packets are handled. A meter band defines the response of the meter to packets in a given rate range, calculated over all packets received by that particular meter from all inputs. Only one meter band processes any given packet.
Table 4.5 Meter Band
4.3.6.2.1 Band Types
These define the way packets are processed.
4.3.6.2.2 Target Rate
It is the required rate of a meter band.
4.3.6.2.3 Burst
It establishes the meter band's granularity.
4.3.6.2.4 Counters
Processing of packets by a meter band updates the
counters.
4.3.6.2.5 Type specific arguments
Optional parameters for particular band types. Meter bands are ordered by their target rates, and a packet is processed by only one meter band, which applies when the measured rate exceeds that band's target rate.
4.3.6.3 Counters
They are used to figure out how many packets a meter has
processed. In the same flow table, multiple flow entries
may utilize various meters, the same meters, or no meters
at all.
Fig. 4.9 Hierarchical DSCP metering and meters.
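A hedged example of attaching a meter to a flow on an Open vSwitch-based OF switch is given below, again reusing s1 from the Mininet sketch; the meter ID, rate, and match fields are illustrative, and meter support in the installed Open vSwitch datapath is assumed.

```python
# Create meter 1: drop packets once the flow exceeds ~10 Mbps (rate in kbps)
s1.cmd('ovs-ofctl -O OpenFlow13 add-meter s1 "meter=1,kbps,band=type=drop,rate=10000"')

# Attach the meter to a flow entry so matching UDP traffic is rate-limited
s1.cmd('ovs-ofctl -O OpenFlow13 add-flow s1 '
       '"priority=50,udp,nw_dst=10.0.0.2,actions=meter:1,output:2"')

# Inspect meter configuration and per-band counters
print(s1.cmd('ovs-ofctl -O OpenFlow13 dump-meters s1'))
```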
4.3.7 Protocols for an Open Flow Channel
A secure OpenFlow channel is provided between the switch and the controller, typically encrypted with TLS (Transport Layer Security) or SSL. The protocol that it employs for these purposes is known as the OF protocol [20].
4.3.7.1 Controller_to_Switch Messages
Messages that the controller initializes to control the
switch's logical state, such as configuration, flow tables,
group tables.
4.3.7.2 Asynchronous Messages
Initiated by the switch, these messages comprise status
updates to inform the controller of changes to the switch's
condition and network events. Additionally, it has a
"Packet-In" message that switches utilize to transfer packets
to OF controllers when their flow tables do not match.
4.3.7.3 Symmetric Messages
Hello messages are initiated by the controller or the switch right after the connection is made between the two devices, while Echo messages are used to measure the latency and bandwidth of the connection between the switch and the controller and to confirm that the device is functioning as intended.
4.4 Benefits of SDN
SDN has the potential to be a promising technology for
managing and solving different problems in data centre
networks. Using the SDN technique, the network administrator only needs to declare these settings in a single place, from which all devices are managed and directed by the SDN controller. Data centres are having scaling
problems, particularly as the number of servers and virtual
machines (VMs) that run on them rises and, later, as the
requirement for migration (VM motion) rises. A significant
bandwidth is needed for virtual machine migration and
updating the MAC address table, which raises network
latency and lowers overall network performance. In typical
data centre architectures, users may encounter interruptions
when accessing apps. Consequently, this researcher
suggests that SDN, "All optical switching," and other
virtualization technologies like OTV can be used to address
this study area/gap.
Table 4.6 OpenFlow Messages
V. HFPFMAOS
5.1 Conceptualized HFPFMAOS Solution
As this research's contribution, HFPFMAOS (Hybrid Flow based Packet Filtering, Forwarding & MEMS based All Optical Switching) is suggested as a remedy for the problems and challenges facing the DCCN. Eight phases make up the implementation of this research concept:
• The deployment of an SDN controller and of OpenFlow in the TOR switches, which link to the controller through SSL.
• Aggregation-level MAOS plane deployment and connection to the SDN controller.
• Development of database for the conceptualized
network topology.
• Flow based Packet Filtering and Forwarding –
FPFSF
• Monitoring of outgoing ports, computation of
links' usage, and alerting the SDN controller.
• Flow Table lookup and flow entry inspection are
used to find the flows consuming more bandwidth
and to inform the source and destination of those
flows.
• Building high-bandwidth data connections
between TOR switches that correspond to the
source and destination indicated.
• Moving a particular flow of traffic to a high-bandwidth link constructed using the MAOS plane, then moving it back to its normal path and tearing down the high-bandwidth link when that particular flow vanishes or resumes operating at its regular data rate.
5.1.1 Installation of Open Flow in Access and SDN
controller
Initially, the researcher first deploys an SDN controller and then enables OpenFlow in all of the access layer (TOR) switches, without disturbing the configuration of the rest of the network's switches, which continue to work just as before. A secure encrypted SSL (Secure Sockets Layer)/TCP connection is then established between each OF-enabled access switch and the SDN controller. This link is used for communication between the SDN controller and the OF switches via the OF protocol.
This connectivity is further utilized by the SDN controller as well as by a number of northbound applications. It carries the initial hello messages (OFPT_HELLO, followed by a FEATURE_REQUEST message from controller to switch and a FEATURE_REPLY message from switch to controller), topology discovery using LLDP (Link Layer Discovery Protocol) and BDDP (Broadcast Domain Discovery Protocol) to start building the topology databases, and the PACKET_IN (to the controller) and PACKET_OUT (to the switch) messages for flows, including those directed over the all-optical path between OF switches.
5.1.2 Setting up the MAOS Plane
At the aggregation level, the researcher deployed the MAOS (MEMS-based all-optical switching) plane alongside the conventional switches. In this approach the existing packet-based switching continues to function together with the newly proposed MAOS plane, which dynamically provides high-speed data routes between servers for switching elephant traffic as needed. In addition, centralized control is exercised by connecting the plane to a central SDN controller for supervision of the network elements (NEs) and control of traffic flow. The figure below shows the conceptualized architecture.
Fig 5.1 Conceptualized Architecture of HFPFMAOS for DCCN
At the aggregation layer, the MAOS plane is implemented side by side with the conventional switches.
5.1.3 Developing Topology Database
Fig 5.2 Database Topology Development Steps
5.1.3.1 Discovery of OpenFlow (OF) Switches
Fig 5.3 OpenFlow OF Switches Process
5.1.3.2 Discovery of Active Links
OFDP can detect indirectly accessible multi-hop links (routes) between OpenFlow (OF) switches by leveraging the L2 discovery protocol known as BDDP [22]. Most well-known open-source controllers, including ODL and Floodlight, adopt this technique.
Fig 5.4 Structural Framework of LLDP
Fig 5.5 BDDP Frame Structure
A switch's identification, basic capabilities, and other
characteristics are determined by an OF controller via the
OF protocol, which includes sending "OFPT FEATURES
REQUEST" messages to all switches as part of the first
handshake.
Fig 5.6 Links discovery Via OFDP
Messages exchanged between OpenFlow OF switches and
controller for Links are shown below.
Fig 5.7 Messages flow between controller and Open Flow
OF switches for Links
The metadata of the OpenFlow (OF) switches on which BDDP packets are received is included in "OFPT_PACKET_IN" messages. These messages are exchanged in both directions, so each link between OF switches is reported twice, once from each end. Based on the BDDP data and the metadata, the controller can identify indirect multi-hop links between OF switches after receiving these "OFPT_PACKET_IN" messages and store them in its database to create the network topology. Most crucially, the controller counts the number of hops between OF switches using the TTL value. The default time interval for this topology discovery operation is 5 seconds. The total number of "OFPT_PACKET_IN" messages received by the controller during this entire discovery process is equal to double the number L of active links accessible in the domain:
Packet_In messages = 2 × L
The total number of "OFPT_PACKET_OUT" messages sent by the controller can be computed as:
Packet_Out messages = Σ Pi = S × P (when every switch has P active ports)
where S denotes the number of OF switches and P the number of active ports per switch. Our reference topology consists of 4 OF switches with 2 active ports each, hence the "OFPT_PACKET_OUT" messages issued by the controller total 2+2+2+2 = 8.
The total number of BDDP OFPT PACKET OUT messages
sent by a controller to a switch can be reduced by
implementing OFDPv2[23], in which the Port ID TLV field
is set to 0 and will be ignored while the source MAC address
field has been set with the MAC address of the port through
which it is to be sent out. One "OFPT_PACKET_OUT" message is sent by the controller for each switch, so under OFDPv2 the total number of messages transmitted is equal to the number of switches.
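A small sketch of these message-count relations is shown below; the port and link counts are illustrative, not measurements from the testbed.

```python
def ofdp_message_counts(active_ports_per_switch, active_links):
    """Discovery-message counts for one topology-discovery cycle.

    active_ports_per_switch: active port count of each OF switch
    active_links: number L of active inter-switch links in the domain
    """
    packet_out_ofdp = sum(active_ports_per_switch)    # classic OFDP: one per active port
    packet_out_ofdpv2 = len(active_ports_per_switch)  # OFDPv2: one per switch
    packet_in = 2 * active_links                      # each link reported from both ends
    return packet_out_ofdp, packet_out_ofdpv2, packet_in

# Reference topology: 4 OF switches with 2 active ports each; link count illustrative
print(ofdp_message_counts([2, 2, 2, 2], active_links=4))   # -> (8, 4, 8)
```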
As each OpenFlow OF switch's port is physically connected
to a MEMS switch's port, we don't need link discovery to
establish connectivity with the MAOS plane; instead, the
controller can create paths between any two points
dynamically, and topology database can statically store all
the data related to flows and port connections.
Table 5.1 Topology Database
5.1.3.3 Discovery of Host
Two methods have been applied to discover the hosts connected to the OpenFlow (OF) switches.
(i) Since BDDP "Packet_Out" messages are sent on all active ports of all switches, the ports for which the OpenFlow (OF) switches do not send "Packet_In" messages during link discovery are recognized as host ports. (ii) A gratuitous ARP (GARP) message is sent by a host when it is connected to an OF switch. HSRP and VRRP [24] use GARP to update the MAC address tables of L2 switches. In a broadcast domain, GARP is an advance-notification mechanism that keeps the controller aware of host discovery and lets it insert MAC-address flow entries into the flow tables of the OF switches, updating the ARP tables of other hosts before they issue ARP requests, when a new host is connected to a switch, or when a host's IP or MAC address changes due to failover. Gratuitous ARP requests are special ARP packets in which the source (host) IP address equals the destination IP address, the destination MAC address field carries the broadcast address (ff:ff:ff:ff:ff:ff), and the Ethertype field is set to 0x0806. The parameters listed below are part of a GARP request message:
• Destination MAC address: FF:FF:FF:FF:FF:FF (broadcast)
• Source MAC address: MAC address of the host
• Source IP address = Destination IP address: IP address of the host transmitting the GARP
• Ethertype: ARP (0x0806)
An example of a GARP request is shown in the figure below.
Fig 5.8 GARP Request Message
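For illustration, a gratuitous ARP like the one in Fig 5.8 can be crafted with Scapy as sketched below; the MAC/IP values and interface name are hypothetical, and the script would have to run inside the sending host's namespace (e.g. a Mininet host).

```python
from scapy.all import Ether, ARP, sendp

host_mac = "00:00:00:00:00:01"      # illustrative host MAC address
host_ip = "10.0.0.1"                # illustrative host IP address

# Gratuitous ARP request: broadcast frame, source IP == destination IP == host IP;
# the ARP layer sets the Ethertype to 0x0806 automatically.
garp = (Ether(dst="ff:ff:ff:ff:ff:ff", src=host_mac) /
        ARP(op=1, hwsrc=host_mac, psrc=host_ip,
            hwdst="ff:ff:ff:ff:ff:ff", pdst=host_ip))

sendp(garp, iface="h1-eth0")        # interface name as seen inside a Mininet host
```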
Each host will ping its neighbor after GARP is finished to
verify connectivity and reachability. Currently, all hosts are
aware of one another's MAC addresses and assigned IP
addresses, which are visible in their ARP tables. To stop
BDDP packet propagation, SDN controllers disable link
discovery on the OpenFlow OF switch ports to which hosts
are connected after discovery. Similar to BPDU guard
security function in conventional switches, this suppression
also prevents BPDU propagation [27].
Table 5.2 Host-1 linked with OSW1-1 - ARP
Table 5.3 Host 2 linked with OSW1-1 – ARP
5.1.4 Discovered Routes & Forwarding Table Buildup
The controller receives information about the interfaces of the OpenFlow (OF) switches, and the list of all routes as well as destinations (MAC/IP addresses) also becomes visible through it, because it holds the topology map of the entire network and a centralized database. Following topology discovery, the SDN controller assigns each discovered route a Route-Tag. The main aim of allocating a Route-Tag is to distinctly identify each discovered route and then generate a forwarding table that gives information about all of the destination (MAC/IP) addresses that may be reached using the same route. This forwarding table is installed by the SDN controller into the OF switch so that flows can be sent to an outgoing interface based on the Route-Tag. The source and destination addresses are used to forward a flow's frames to the next hop in the traditional way, because the switch's content addressable memory table also includes a list of MAC addresses reachable over each link (port). Later, when the same flow reaches an OpenFlow switch, the switch again searches all of its flow tables for the best possible match before forwarding the flow to the correct host port after verifying its destination address.
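As a purely illustrative sketch of the data structures described above (the field names, Route-Tags, and values are assumptions, not ODL structures):

```python
# Centralised topology database kept by the controller (illustrative entries)
topology_db = {
    ('OSW1-1', 'port8'): {'type': 'host', 'mac': '00:00:00:00:00:01', 'ip': '10.0.0.1'},
    ('OSW1-1', 'port1'): {'type': 'link', 'peer': ('OSW1-2', 'port1'), 'hops': 2},
}

# Forwarding table keyed by Route-Tag: destinations reachable over the same route
forwarding_table = {
    'ROUTE-7': {
        'reachable_macs': ['00:00:00:00:00:02', '00:00:00:00:00:04'],
        'out_port': 'port1',
    },
}

def out_port_for(dst_mac):
    """Return (Route-Tag, outgoing port) for a destination MAC, or punt to controller."""
    for tag, route in forwarding_table.items():
        if dst_mac in route['reachable_macs']:
            return tag, route['out_port']
    return None, 'CONTROLLER'        # table miss: packet goes to the controller

print(out_port_for('00:00:00:00:00:02'))
```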
5.1.5 Building up of Flow Table by inserting Flow
entries
When a flow enters an OpenFlow (OF) switch, each incoming packet is compared against one or more flow tables, each of which contains multiple flow entries, and the action to be taken is then determined. Any sort of matching can be used, for example matching on an ingress port, a source or destination MAC or IP address, a VLAN ID, a TCP/UDP port number, etc.
To provide maximum flexibility and to support the diverse types of data traffic passing through the DCCN while serving delay-sensitive traffic with minimal latency, a hybrid model of flow table entries that combines the proactive and reactive modes is employed in this thesis. A proactive flow-entry model is required for delay-sensitive applications that demand low latency, for example audio/video calls, live radio/TV transmission, and financial banking transactions, as well as for routine heavy traffic such as web browsing, data file/folder transfer, and peer-to-peer traffic.
Fig 5.9 Anatomy - OpenFlow OF
When a flow entry is found whose match fields correspond to the relevant fields of the data traffic, the switch either executes that flow entry's actions or forwards the packet to a group table to execute multiple actions. The importance of the priority parameter cannot be overstated: flow entries are prioritised, and if several entries match, the one with the highest priority is used and the others are ignored. The table below shows the different types of counters.
Table 5.4 Types of Counters
By issuing OFPMP_PORT_STATS requests to the OpenFlow (OF) switch, the SDN controller continuously receives the statistics of the active outgoing links (ports), such as the number of Rx packets, Tx packets, Rx bytes, and Tx bytes, as well as the duration in seconds, dropped packets, and Tx/Rx errors. Based on the returned statistics, the controller calculates the link utilisation of all the desired ports. The link utilisation is calculated using the following formula [28]:
Fig 5.10 Formula to calculate Link Utilization
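Fig 5.10 is not reproduced in this text. Assuming it follows the standard SNMP-style utilisation calculation (as in the Cisco bandwidth-calculation note listed in the references), the formula can be written as:

```latex
\mathrm{Utilisation}\,(\%) \;=\;
\frac{(\Delta \mathrm{RxBytes} + \Delta \mathrm{TxBytes}) \times 8 \times 100}
     {\Delta t \;\times\; \mathrm{LinkSpeed}\ \mathrm{(bit/s)}}
```

where ΔRxBytes and ΔTxBytes are the differences between the byte counters in two successive OFPMP_PORT_STATS replies and Δt is the polling interval in seconds.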
For each of the outgoing interfaces, the controller maintains a specific link utilisation threshold. When the threshold is exceeded, it consults its flow table to identify the flows whose outgoing interfaces have exceeded the threshold. After these ports are identified, the SDN controller determines whether any ports are available before creating a high-bandwidth data path across the MEMS plane.
If no match is found, the packet hits the table-miss entry and, by default, the controller is notified. The controller then instructs the switch to take certain actions. The "Flow Mod" message contains a variety of information, including buffer IDs, timeouts, actions, priorities, and more. Flow entries can be permanent or time-limited, and there are two timeout types: "hard timeout" and "idle timeout". An idle timeout removes an entry from the flow table if no matching packet arrives during that period, whereas a hard timeout is the maximum time an entry can remain in the flow table, regardless of whether matching traffic is present. If the hard timeout is set to 0, it is disabled.
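The two timeout types can be illustrated with a small sketch. The FlowEntry class below is purely illustrative and not part of any OpenFlow library; it only models the expiry semantics described above.

```python
import time

class FlowEntry:
    """Illustrative model of idle vs. hard timeout semantics (not a real OF library)."""

    def __init__(self, match, idle_timeout=30, hard_timeout=0):
        now = time.time()
        self.match = match
        self.idle_timeout = idle_timeout    # seconds without a matching packet
        self.hard_timeout = hard_timeout    # absolute lifetime; 0 = disabled
        self.installed_at = now
        self.last_hit = now

    def hit(self):
        """Called whenever a packet matches this entry."""
        self.last_hit = time.time()

    def expired(self):
        now = time.time()
        if self.idle_timeout and now - self.last_hit >= self.idle_timeout:
            return True                      # no matching traffic for idle_timeout seconds
        if self.hard_timeout and now - self.installed_at >= self.hard_timeout:
            return True                      # lifetime exceeded regardless of traffic
        return False

entry = FlowEntry(match={"eth_dst": "00:00:00:00:00:02"}, idle_timeout=10, hard_timeout=0)
print(entry.expired())   # False immediately after installation
```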
As an illustration, in our reference topology Host 1 sends an HTTP request to Host 2 (say, a web server) after host discovery by GARP. The exchange begins with a SYN message. Host 1 sends the SYN to switch OSW1-1, which checks its flow table upon receiving the packet; because it is the initial packet, there is likely no flow entry that matches it. This is termed a table-miss flow entry. The switch therefore encapsulates this TCP packet in a "Packet In" message and passes it to the controller by default. The Packet In message contains either the entire TCP packet or its buffer ID (for example, Buffer ID = 250, which designates the location where the switch stores the whole TCP message). The controller then takes a few actions, such as returning a "Flow Mod" or "Packet Out" message to the switch. The "Packet Out" message tells the switch how to handle that particular packet and carries either the whole encapsulated TCP packet or the reference buffer ID under which the switch stored it. When switch OSW1-1 receives such a "Packet Out" message, it sends the TCP SYN packet referenced by buffer ID 250 out of port 8 towards Host 2. The switch is then directed, through the "Flow Mod" message, to add a new flow entry to its flow table. When a similar packet arrives at the switch in the future and matches the entry's fields and masks, this flow entry tells the switch what to do. In this way the message instructs the switch to route any TCP request from Host 1's IP or MAC address towards Host 2's IP or MAC address out of port 8. Additionally, it tells the switch to release the packet it had been buffering under buffer ID 250 and to carry out the instructions in this message. Host 2 replies with a SYN/ACK packet to the switch, which receives it but finds no flow entry from Host 2 to Host 1 (another table miss). To send this SYN/ACK packet to the controller for further analysis, the switch encapsulates it in a "Packet In" message and assigns it the reference buffer ID 251. In response, the controller instructs the switch to add a flow entry to the flow table and to perform an action, namely forwarding the SYN/ACK message to port 7; the controller again sends the switch a Packet Out and a Flow Mod message. After all of this, the remainder of the communication between Host 1 and Host 2 no longer reaches the controller, because the switches now have flow entries in their flow tables that tell them how to handle the packets. The switch routes the HTTP reply and ACK messages directly, as demonstrated in the following figures:
Fig 5.11 (a) HTTP request with Open Flow messaging
Fig 5.11 (b) HTTP Reply-Open Flow messages
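The Packet In / Flow Mod / Packet Out exchange described above can be sketched as a minimal OpenFlow 1.3 controller application. The prototype in this paper uses OpenDaylight; the snippet below uses the Ryu framework purely because it lets the exchange be shown compactly in Python, and it is a generic learning-switch sketch, not the author's HFPFMAOS logic. It also assumes a table-miss entry that sends unmatched packets to the controller is already installed.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet

class MinimalLearningSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(MinimalLearningSwitch, self).__init__(*args, **kwargs)
        self.mac_to_port = {}

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # the Packet In sent by the switch
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        eth = packet.Packet(msg.data).get_protocols(ethernet.ethernet)[0]
        self.mac_to_port.setdefault(dp.id, {})[eth.src] = in_port
        out_port = self.mac_to_port[dp.id].get(eth.dst, ofp.OFPP_FLOOD)
        actions = [parser.OFPActionOutput(out_port)]

        if out_port != ofp.OFPP_FLOOD:
            # Flow Mod: install an entry so later packets of this flow bypass the controller
            match = parser.OFPMatch(in_port=in_port, eth_dst=eth.dst)
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10, match=match,
                                          instructions=inst, idle_timeout=30,
                                          hard_timeout=0, buffer_id=msg.buffer_id))
            if msg.buffer_id != ofp.OFP_NO_BUFFER:
                return                    # switch holds the packet; the Flow Mod releases it

        # Packet Out: tell the switch what to do with this particular buffered packet
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions, data=data))
```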
The following table shows the OF switch flow entries:
Table 5.5 Flow Table of OF Switch OSW1-1 Flow Entries
The very first entry instructs the switch to broadcast packets that arrive from the controller port, regardless of their ether type, to all other switch ports, indicating that they are BDDP messages, and to update the counter. The second entry instructs the switch to issue a GARP request and update the counter, regardless of the other fields, if the ether type is 0x0806 (ARP). The third, fourth, and fifth lines are L2 matching and forwarding entries, which instruct the switch to perform the specific action given by the Action field and update the counter, regardless of other fields, if a packet arrives with a certain destination MAC. The sixth and seventh lines route IP packets to a particular port based on their destination IP. The eighth line is the default route for our reference topology, and the final two flow entries are TCP flow entries for HTTP requests and replies.
When the bulk of brief flows, often referred to as "mouse flows," start flowing alongside long-lived, high-bandwidth data flows, or "elephant flows," a bandwidth bottleneck results. High-bandwidth data flows, which can be caused by VM migration, database backups, or software such as Hadoop, among other things, cause switch buffers to overflow, which slows down packet processing and harms applications that require low latency.
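A simple way to picture the elephant/mouse distinction is to classify flows by the byte counters reported for them, as in the illustrative sketch below. The 10 MB threshold and the function name are assumptions made for the example, not values taken from the paper.

```python
# Hypothetical elephant/mouse classification from per-flow byte counters.
ELEPHANT_BYTES = 10 * 1024 * 1024   # assumed threshold: ~10 MB transferred

def classify_flow(byte_count: int) -> str:
    """Label a flow from its cumulative byte counter."""
    return "elephant" if byte_count >= ELEPHANT_BYTES else "mouse"

flows = {("10.0.0.1", "10.0.0.9"): 850_000_000,    # e.g. VM migration / backup
         ("10.0.0.2", "10.0.0.3"): 42_000}         # e.g. short HTTP exchange
for pair, counted_bytes in flows.items():
    print(pair, classify_flow(counted_bytes))
```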
Nowadays, many virtual machines run on a single server for various applications. Suppose that, for whatever reason, one virtual machine running an application in Server Rack 1 starts using a lot of bandwidth and CPU, adversely affecting the other applications operating on the same server. Since the switch buffers start to overflow, the administrator is compelled to move the virtual machine to Server Rack 2 of the same POD, which results in a bandwidth bottleneck and increased latency on the network links, as shown in the figure below.
Fig 5.12 Congestion
Links coloured red indicate congestion on an active packet-switched network link that a packet is attempting to cross. As seen in the figure, the switches alert their management/control plane to the network congestion and increased delay.
Fig 5.13 Congestion Notification by Switches to Control
Plane
In reply, the management/control plane inspects the packets to determine their source and destination, consults its topology database, and calculates the locations between which high-bandwidth data routes must be established.
As a result, the control plane signals the MAOS control plane, which is made up of MEMS-based optical switches.
Fig 5.14 MEMS switches produce High data rate optical
path
As seen above, data is travelling across both the packet-switched network and a temporary, high-bandwidth channel, depicted in green and enabled by the MEMS-based switches. Once the persistent high-bandwidth flow has subsided and traffic has returned to normal, the control plane tears down the temporarily established optical link and reallocates it as needed.
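The control-plane behaviour described in this section can be summarised in a short sketch. The threshold value, the function name, and the request_mems_path / release_mems_path hooks are hypothetical placeholders for whatever interface the MAOS control plane actually exposes.

```python
# Hypothetical congestion-driven MEMS path management (all names are placeholders).
UTIL_THRESHOLD = 0.8   # assumed: offload when a packet-switched link exceeds 80% utilisation

def manage_mems_path(link_utilisation: float, mems_path_active: bool,
                     request_mems_path, release_mems_path) -> bool:
    """Set up or tear down a MEMS optical path based on measured link utilisation."""
    if link_utilisation > UTIL_THRESHOLD and not mems_path_active:
        request_mems_path()          # controller asks the MAOS plane for an optical circuit
        return True
    if link_utilisation <= UTIL_THRESHOLD and mems_path_active:
        release_mems_path()          # traffic back to normal: reclaim the circuit
        return False
    return mems_path_active

# Example run with dummy hooks:
state = False
state = manage_mems_path(0.93, state, lambda: print("optical path up"),
                         lambda: print("optical path down"))
state = manage_mems_path(0.40, state, lambda: print("optical path up"),
                         lambda: print("optical path down"))
```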
VI. SOFTWARE IMPLEMENTATION
6.1 Reference Network Topology
To implement the proposed solution in software, the researcher came up with the conceptualised architecture shown below.
Fig 6.1 Prototype testing Reference network topology
Our network topology consists of two pods, POD 1 and POD 2, with each pod comprising eight hosts or servers, two aggregation switches, and two access switches. POD 1 is made up of two aggregation switches (SW3 and SW4), two access (TOR) switches (SW5 and SW6), and eight traffic-generating hosts (Host 1 through Host 8). POD 2 likewise consists of eight hosts, two TOR switches (SW9 and SW10), and two aggregation switches (SW7 and SW8).
6.2 Software used
The tools used are shown in the figures below:
Fig 6.2 Software Tool Used Mininet
Fig 6.3 Software Tool Used MiniEdit
Fig 6.4 Software Tool Used Open Day Light
Fig 6.5 Software Tool Used IPERF
Fig 6.6 Software Tool Used WireShark
6.3 Preparation/Implementation
The following actions were taken to prepare this setup:
• First, I downloaded and installed Oracle VM VirtualBox to run the Mininet and OpenDaylight virtual machines.
• Next, I created a VM in VirtualBox, downloaded the Mininet virtual machine image, mounted it on the fresh virtual machine, and installed Mininet.
• Third, I created a second VirtualBox virtual machine and installed the ODL controller setup.
Mininet and its GUI program MiniEdit (running on one VM in VirtualBox) were used to lay out and simulate the reference topology. X forwarding is required to run MiniEdit and connect to Mininet over SSH; for this, PuTTY and Xming were used. OpenDaylight (ODL Beryllium), an external SDN-based OpenFlow controller, was used to operate the OpenFlow virtual switches and the MEMS switches. Data flows were generated from the hosts and servers, and several performance characteristics, such as link bandwidth and network latency, were measured using iperf. Wireshark was used to inspect data packets and analyse a variety of protocol messages on several interfaces across the whole network topology.
6.4 Applications
The deployment of HFPFMAOS was carried out in two main steps:
Fig 6.7 Deployment Steps of HFPFMAOS
Fig 6.8 OF switches based Network topology - Access as
TOR switches
Links between hosts and OF switches are 4 Mb/s with a 10
msec delay, while links between all traditional switches and
between OF switches are 10 Mb/s with a 5 msec delay, as
demonstrated in the following figures:
Hosts, nodes, links, and interfaces can be verified with the commands mininet> nodes, mininet> net, and mininet> dump.
Fig 6.9 Creating Links Switches
Fig 6.10 Network/Nodes Verification
Fig 6.11 OF Switch interfaces verification
The flow entries in the flow tables of OF switches OSW1-1, OSW1-2, OSW1-3, and OSW1-4 for the proposed HFPFMAOS solution are shown below.
For host discovery and ARP table buildup:
mininet> pingall
Fig 6.12 Host Discovery by pingall command
Fig 6.13 Host pinging
For link discovery - GARP requests:
For link discovery - BDDP requests:
For enabling Layer 2 forwarding:
Traffic generation with iperf:
Fig 6.14 Generation of Traffic and Latency (Delay)
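Under the assumption that traffic was generated with Mininet's built-in iperf and ping helpers, the measurement step can be reproduced roughly as follows; this is a sketch, not the author's exact commands, and the host names are placeholders.

```python
# Sketch: measuring bandwidth and latency between two Mininet hosts.
# Assumes a Mininet network object `net` has already been built and started.
from mininet.net import Mininet

def measure(net: Mininet, src_name: str = "h1", dst_name: str = "h2"):
    src, dst = net.get(src_name), net.get(dst_name)
    # TCP throughput between the two hosts (returns server/client bandwidth strings)
    bandwidth = net.iperf((src, dst), l4Type='TCP', seconds=10)
    # Round-trip latency via ping; the per-link delays configured in the topology show up here
    loss = net.ping([src, dst])
    return bandwidth, loss
```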
STEP 2: Mininet network topology with MEMS
In the second phase, another OpenFlow switch, called MEMSSW1, is added to the aggregation layer alongside the conventional switches and connected to the OF (TOR) switches in order to test the concept of MEMS switching. The bandwidth and latency of the links to MEMSSW1 are intentionally kept on the higher side, namely 1000 Mb/s and 1 msec, to demonstrate the idea of MEMS. The buffer size and throughput (speedup) of these links are set equal to the link bandwidth, i.e. 1000 Mb/s, to emulate all-optical switching. The resulting network topology is displayed below:
Fig 6.15 Network Topology with MEMS
6.5 Code for Topology Creation:
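The topology-creation code itself is not reproduced in this version of the paper. A minimal sketch of how such a topology could be built with Mininet's Python API is given below; the switch and host names, the controller address, and the exact wiring are assumptions that only approximate the reference topology (OF access switches, a packet-switched inter-switch path, and a high-bandwidth MEMSSW1), not the author's original script.

```python
#!/usr/bin/env python
"""Sketch of a Mininet topology approximating the reference DCCN test setup."""
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.log import setLogLevel

def build():
    net = Mininet(controller=None, switch=OVSSwitch, link=TCLink)
    # External SDN controller (ODL in the paper); the address and port are assumptions.
    net.addController('c0', controller=RemoteController, ip='127.0.0.1', port=6633)

    # OF access (TOR) switches and one MEMS-like high-bandwidth switch.
    osw1 = net.addSwitch('osw1', protocols='OpenFlow13')
    osw2 = net.addSwitch('osw2', protocols='OpenFlow13')
    mems = net.addSwitch('memssw1', protocols='OpenFlow13')

    # Hosts: 4 Mb/s, 10 ms delay towards their TOR switch, as stated in Section 6.4.
    hosts = [net.addHost('h%d' % i) for i in range(1, 5)]
    for h in hosts[:2]:
        net.addLink(h, osw1, bw=4, delay='10ms')
    for h in hosts[2:]:
        net.addLink(h, osw2, bw=4, delay='10ms')

    # Packet-switched inter-switch link: 10 Mb/s, 5 ms delay.
    net.addLink(osw1, osw2, bw=10, delay='5ms')

    # "Optical" MEMS links: 1000 Mb/s, 1 ms delay, as in the second phase.
    net.addLink(osw1, mems, bw=1000, delay='1ms')
    net.addLink(osw2, mems, bw=1000, delay='1ms')

    net.start()
    return net

if __name__ == '__main__':
    setLogLevel('info')
    net = build()
    net.pingAll()   # host discovery / reachability check
    net.stop()
```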
REFERENCES
[1] https://cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-indexgci/Cloud_Index_White_Paper.pdf
[2] https://www.ciscoknowledgenetwork.com/files 477_11-11-2014-CiscoGCIDraftDeck2013-2018_CKN.pdf
[3] http://www.ciscopress.com/store/data-centre-virtualization-fundamentals-understanding-9781587143243
[4] http://www.graybar.com/applications/data-centres/types
[5] http://www.businessnewsdaily.com/4982-cloud-vs-data-centre.html
[6] http://www.cyberciti.biz/faq/data-centre-standard-overview/
[7] http://www.graybar.com/applications/data-centres/tiers
[8] https://www.microsoft.com/en-us/research/publication/the-cost-of-a-cloud-research-problems-in-data-centre-networks/
[9] Cisco Systems: Data centre: Load balancing data centre services, 2004.
[10] http://research.microsoft.com/en-us/um/people/dmaltz/papers/DC-Costs-CCR-editorial.pdf
[11] M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data centre network architecture," in ACM SIGCOMM, Aug. 2008.
[12] C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, and S. Lu, "BCube: a high performance, server-centric network architecture for modular data centres," in ACM SIGCOMM, Aug. 2009.
[13] C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A scalable and fault-tolerant network structure for data centres," in ACM SIGCOMM, Aug. 2008, pp. 75-86.
[14] http://research.microsoft.com/en-us/um/people/srikanth/data/vl2_sigcomm09.pdf
[15] "3D MEMS Optical switch with toroidal concave mirror," https://www.ntt-review.jp/archive/ntttechnical.php?contents=ntr201211ra1_s.html
[16] J. Yamaguchi, T. Sakata, N. Shimoyama, "High-yield Fabrication Methods for MEMS Tilt Mirror Array for Optical Switches"; "State of the Art of Optical Switching Technology for All-Optical Networks," http://home.deib.polimi.it/bregni/papers/cscc2001_optswitch.pdf
[17] http://sanmapyourtech.blogspot.com/2014_08_01_archive.html
[18] https://www.sdxcentral.com/wp-content/uploads/2015/11/2015_SDxCentral_-SDN_Controllers-Report_Cisco_FINAL.pdf
[19] http://www.opendaylight.org/project/technical-overview
[20] http://h17007.www1.hpe.com/docs/networking/solutions/sdn/devcentre/03_HP_OpenFlow_Technical_Overview_TSG_v1_2013-10-01.pdf
[21] http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-59/161-sdn.html
[22] https://arxiv.org/ftp/arxiv/papers/1406/1406.0124.pdf
[23] http://upcommons.upc.edu/bitstream/handle/2117/77672/Current+Trends+of+Discovery+Topology+in+SDN.pdf?sequence=1
[24] https://www.researchgate.net/file.PostFileLoader.html?id=554cd4f7d2fd64b73e8b456c&assetKey=AS%3A273773314412545%401442284052403
[25] http://networkengineering.stackexchange.com/questions/7713/how-does-gratuitous-arp-work
[26] http://www.taos.com/2014/07/16/understanding-gratuitous-arps/
[27] https://live.paloaltonetworks.com/t5/Management-Articles/Trigger-a-Gratuitous-ARP-GARP-from-a-Palo-Alto-Networks-Device/ta-p/61962
[28] "Poisoning Network Visibility in Software-Defined Networks: New Attacks and Countermeasures," http://www.internetsociety.org/sites/default/files/10_4_2.pdf
[29] http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/8141-calculate-bandwidth-snmp.html
[30] http://internetofthingsagenda.techtarget.com/definition/micro-electromechanical-systems-MEMS
[31] https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.1.pdf
[32] https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.5.1.pdf
[33] https://www.mininet.org
More Related Content

PDF
Improving the Latency Value by Virtualizing Distributed Data Center and Auto...
PDF
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
PPTX
Lect#5 asdasdasdasdasdasdasdasdasd(2).pptx
PPTX
introduction to distributed computing.pptx
PDF
CC LECTURE NOTES (1).pdf
PDF
09e4150ead3fa7505a000000
PDF
Sdn whitepaper
PDF
A Case Study On Implementation Of Grid Computing To Academic Institution
Improving the Latency Value by Virtualizing Distributed Data Center and Auto...
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES
Lect#5 asdasdasdasdasdasdasdasdasd(2).pptx
introduction to distributed computing.pptx
CC LECTURE NOTES (1).pdf
09e4150ead3fa7505a000000
Sdn whitepaper
A Case Study On Implementation Of Grid Computing To Academic Institution

Similar to Data Centre Network Optimization (20)

PDF
How to Ensure Next-Generation Services
PDF
OCC-Executive-Summary-20150323
PPTX
CloudComputingArchitecture Module 1 ppt.pptx
PDF
Week2
PPTX
cc_mod1.ppt useful for engineering students
PDF
E0332427
PDF
Critical Information Infrastructure Systems Worldwide
PPTX
Unighfchttcbhtgjgtrhyyfghgtfhhytght 1.pptx
PPTX
Cloud-Computing-32-40[1].pptx...,..........
PDF
Diseño de red isp cisco
PPT
Cloud computing
PPTX
it is the presentation which shows the various topics of the loud computing
PPTX
Cloud technology
PDF
Cloud & Data Center Networking
PDF
The International Journal of Engineering and Science (The IJES)
PPTX
Unit-I Introduction to Cloud Computing.pptx
PDF
Ant colony Optimization: A Solution of Load balancing in Cloud  
PDF
China Mobile - OpenStack Summit 2016/Red Hat Mini NFV Summit
PDF
TECHNICAL WHITE PAPER: NetBackup Appliances WAN Optimization
PPTX
Cloud Computer and Computing Fundamentals.pptx
How to Ensure Next-Generation Services
OCC-Executive-Summary-20150323
CloudComputingArchitecture Module 1 ppt.pptx
Week2
cc_mod1.ppt useful for engineering students
E0332427
Critical Information Infrastructure Systems Worldwide
Unighfchttcbhtgjgtrhyyfghgtfhhytght 1.pptx
Cloud-Computing-32-40[1].pptx...,..........
Diseño de red isp cisco
Cloud computing
it is the presentation which shows the various topics of the loud computing
Cloud technology
Cloud & Data Center Networking
The International Journal of Engineering and Science (The IJES)
Unit-I Introduction to Cloud Computing.pptx
Ant colony Optimization: A Solution of Load balancing in Cloud  
China Mobile - OpenStack Summit 2016/Red Hat Mini NFV Summit
TECHNICAL WHITE PAPER: NetBackup Appliances WAN Optimization
Cloud Computer and Computing Fundamentals.pptx
Ad

Recently uploaded (20)

PPTX
Foundation to blockchain - A guide to Blockchain Tech
PPTX
Strings in CPP - Strings in C++ are sequences of characters used to store and...
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PDF
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
PPTX
Lecture Notes Electrical Wiring System Components
PPTX
Lesson 3_Tessellation.pptx finite Mathematics
PDF
Operating System & Kernel Study Guide-1 - converted.pdf
PPT
Mechanical Engineering MATERIALS Selection
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PDF
PPT on Performance Review to get promotions
PDF
composite construction of structures.pdf
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PPTX
Sustainable Sites - Green Building Construction
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PPTX
CH1 Production IntroductoryConcepts.pptx
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Foundation to blockchain - A guide to Blockchain Tech
Strings in CPP - Strings in C++ are sequences of characters used to store and...
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
Lecture Notes Electrical Wiring System Components
Lesson 3_Tessellation.pptx finite Mathematics
Operating System & Kernel Study Guide-1 - converted.pdf
Mechanical Engineering MATERIALS Selection
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPT on Performance Review to get promotions
composite construction of structures.pdf
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Sustainable Sites - Green Building Construction
UNIT-1 - COAL BASED THERMAL POWER PLANTS
CH1 Production IntroductoryConcepts.pptx
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Ad

Data Centre Network Optimization

  • 1. International Journal of Advanced Engineering, Management and Science (IJAEMS) Peer-Reviewed Journal ISSN: 2454-1311 | Vol-9, Issue-9; Sep, 2023 Journal Home Page: https://ijaems.com/ Article DOI: https://dx.doi.org/10.22161/ijaems.99.1 This article can be downloaded from here: www.ijaems.com 1 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ Data Centre Network Optimization Tajammul Hussain itajammuleng@gmail.com Received: 11 Jul 2023; Received in revised form: 15 Aug 2023; Accepted: 25 Aug 2023; Available online: 03 Sep Aug 2023 Abstract— Technology has advanced quite quickly in recent years, and IT and telecom infrastructure is constantly growing. Due to an increased importance and emphasis of cloud computing many growing and emerging enterprises have shifted their computing needs and wants ultimately to the cloud, which leads to increase in inter-server data trafficking and also the bandwidth required for Data Centre Networking (DCCN). Actually this multi-tier hierarchical architecture used in modern data centres is based on traditional Ethernet/fabric switches. Researchers goal was to improve the data centre communication network design such that the majority of its problems can be solved while still using the existing network infrastructure and with lower capital expenditures. This is achieved through the deployment of OpenFlow (OF) switches and the Microelectromechanical Systems (MEMS) based all Optical Switching (MAOS). This will be beneficial in decreasing network latency, power consumption, and CAPEX costs in addition to helping with scalability concerns, traffic management, and congestion control. Additionally, by implementing new virtualization techniques, we may enhance DCCN's resource consumption and cable problems. In order to address data centre challenges, the researcher came up with an entirely new novel flat data centre coordination network architecture which is named as “Hybrid Flow based Packet filtering” (HFPFMAOS), forwarding, and MEMS which is entirely based on optical switching, and will be finally controlled by Software Defined Network (SDN) controller. Keywords— Data Centre, Network Optimization, Software defined Networking, OpenFlow I. INTRODUCTION 1.1 Background Data Centre is specialized building that houses various IT resources, such as servers, data storage systems, and networking and communication devices. Data centres have become increasingly important in IT and data networks over the last few years as a result of the explosive expansion of information technology and internet consumption. Additionally, as a result of the emergence of cloud computing, server virtualization, social media, and mobile data, the number of servers deployed, storage space, and interconnected network equipment in data centres is growing tremendously. Data centre IT network's primary elements include: • Data Centre Communication Network • High Performance Computing Network • Storage Area Network Three categories of traffic can be used to categorize a Data Centre. • Traffic flows which are within the data centre. • Traffic flows which travel between data centres. • Traffic flows between data centre and the end user. Data centre IP traffic is increasing at a fast rate [1] as depicted in the below figure. 
The primary driver of this expansion is cloud computing, which by 2023 is predicted to account for 4/5 of all traffic in data centres [2]. Fig 1.1: Global Data Centre Traffic, 2016–2021 Global data centres traffic by destination is shown in the figure below.
  • 2. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 2 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ Fig 1.2: Global Data Centres Traffic by Destination It is evident from the above chart that the majority of traffic occurs inside data centres, with the majority of that traffic travelling between servers and edge switches. Figure below depicts the traffic forecast between the various tiers of the data centre communication network. Fig 1.3 Traffic Forecast at Different Layers 1.2 Problem Statement Limited server capacity and high oversubscription rates, network congestion and latency issues, management of internal data-centre traffic, support for a variety of traffic patterns, scalability, agility, effective resource utilization, management, fault handling, and troubleshooting are among the problems that DCCN must deal with. When building a data centre, these issues must be taken into consideration. The problems or difficulties that the DCCN will encounter are as follows: • Limited capacity between servers and a high oversubscription rate as traffic moves through the layers of switches in a hierarchical fashion. • Problems with congestion and high network latency have a negative impact on delay-sensitive applications and affect network performance as a whole. • Controlling the nearly 75% of the overall traffic that is generated inside the data centre (i.e., east- west traffic between racks). • Ease of Management, fault handling and troubleshooting in the network. • Support for a range of traffic patterns, such as long-lasting, high-bandwidth "elephant" flows and short-lasting persistent "mouse" flows • As a network grows, difficulties with scalability, agility, and efficient resource use arise. 1.3 Research Contribution It relates to developing a low-cost, capital-efficient solution that will improve the scalability, traffic management, congestion control, and resource usage of data centre design. Academic contribution relates to proposing a solution HFPFMAOS using OpenFlow tools for: • Congestion Reduction • Improvement in Network Latency • Implementing Scalability • Traffic Management • Reducing power Consumption • CAPEX Reduction Practical contribution of this research relates to coming up with an innovative data centre topology architecture with improved performance. 1.4 Objectives • Application of Hybrid Flow to packet filtering /Forwarding and MEMS on the basis of an optical switching solution by utilizing different virtualization techniques. • Efficient utilization of data centre network resources. o Reducing Congestion o Improving Network Latency o Implementing Scalability o Traffic Management • Reducing Power Consumption and saving CAPEX. • Provisioning of centralized intelligence of whole network. 1.5 Research Questions • How to come up with a solution based on HFPFMAOS and MEMS based all optical switching? • How to achieve efficient utilization of data centre network resources? • How to reduce congestion, improving network latency, scalability and traffic management? • How to reduce power consumption and save CAPEX? • How to implement centralized intelligence of whole network? • How to implement network virtualization techniques?
  • 3. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 3 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ 1.6 Research Methodology OpenFlow (OF) switches were used in the access layer as TOR switches. OF messages were captured using Wire shark. Traffic was generated between various hosts (i.e. server machines) using iperf testing tool. Performance of the network and the delays were calculated/analyzed. For clarity, a reference data centre topology was used. Aggregation of the MEMS switch with conventional switch was done and connected it to the OpenFlow (OF) switches which were installed at the access layer. Network traffic was generated between various hosts and flow delays were observed. Researcher then directed flows via using MEMS switch and simultaneously monitored performance of the network. 1.7 Software’s The researcher used the following software’s for carrying out the requisite analysis: a) MININET b) Putty c) Wireshark d) Oracle VM Virtual Box e) Miniedit f) ODL Controller g) Xming II. DATA CENTRE (DC) 2.1 Introduction Data centre is basically an entirely one of its own kind of building which is invented to contains, supervises, and provides [3]. A data centre structure includes very unique building infrastructures, backup plans for power shortages or unavailability, cooling agents for systems, special equipment Chester’s, servers, mainframes, and High performance Computing (HPC) networks to handle variety of communication data, as well as well-structured sophisticated cabling, application of various software, monitoring centers, and well equipped physical security systems. Fig. 2.1 Basic Data Centre Diagram 2.2 Evolution First Data centre phase (called Phase 1.0) began in the 1950s and consisted of computer rooms with mainframe systems' CPUs and peripheral devices including storage, terminals, and printers, among other things. These monolithic software-based centralized systems give users little control over IT infrastructure and make heavy use of available resources. Phase 2.0 of data centres begins in the 1980s when client-server application models gain prominent. Currently, servers have replaced main frame computers, which are fairly compact and accessible through client PC-installed apps. In this phase, servers that carry out specific tasks are typically installed close to clients rather than the main IT infrastructure to avoid paying excessive bandwidth costs. When the internet began to take off in the 1990s and the use of web-based applications increased, this mandated that servers be placed centrally in properly constructed data centres. Thus, the data centre that arises from the mainframe computer room has gained significance in the 1990s. Fig. 2.2 [3] depicts the timeline of data centre evolution. Fig. 2.2 Evolution Phases Phase 3.0 began around the year 2000, and because technology evolved so quickly at this time, datacenter space, electricity, and the IT network are beginning to reach capacity. As a result, it is expensive to expand existing facilities or to build new ones. According to a 2005 Cisco IT research, around this time DC networks and servers were typically only used at 20% of their capacity. Application silos with discrete sets of servers, networks, and storage resources were the primary contributors to this problem. 
Later, a number of network consolidation projects are completed that provide new features, enhance resource utilization, decrease the number of network components and processes, and improve operational effectiveness. 2.3 Types The are two in number [4][5] • Private
  • 4. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 4 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ • Cloud Based 2.3.1 Private It is the on-site hardware that offers computing services and stores data within a local network that is managed by an organization's IT department. These are owned and run by small private/public businesses, governmental organization’s etc. 2.3.2 Cloud Based Another name used for cloud data centres is co- location/managed services provider. Co-located Data Centers were created and maintained to offer specialized infrastructure and provide various services to external sources and parties. Major elements that force businesses to either go for cloud computing or develop their own data centers are: • Business market requirements • Data privacy issues • The increasing cost of related equipment/infrastructure 2.4 Data Centre Communication Network (DCCN) Data services and data transit from the server to clients or other servers are the major goals of DCCN. These qualities are now required from the perspectives of dependability, expansion, and efficiency for data centre communication networks. • Availability • Scalability • Flexibility • Efficiency • Predictability Ethernet is the most ubiquitous and well-liked DCCN protocol for data lines, with an interface range ranging from 10Mbps to 100Gbps. However, as the volume of traffic through the DCCN increased, various constraints compelled the development and adoption of new virtualization technologies as well as the optimization of the data centre network through the use of cutting-edge networking strategies like MEMS all optical switching and conventional packet switching. 2.5 DC Tiers Various Tiers of Data Centres are shown in the figure below from Tier-1 to Tier-4. Fig. 2.3 DC Tiers 2.6 Design Factors for Data Centre Network • The following factors must be taken into account when planning and deploying DCCN architecture [3]. • Failure Impact & Application Resilience: All local and distant users will be unable to access their apps if DCCN fails. • • Connectivity between the server and host: Servers must be linked together using several redundant Ethernet connections. • Traffic Direction: In DCCN, the majority of traffic is between servers inside the data centre. • Agility: This term refers to the capacity for assigning any service or application to any server at any moment in the data centre network while maintaining sufficient performance isolation and security amongst various applications. • Growth Rate: Increasing number of servers, switches, and switch ports, etc., as customers and their data traffic increases. • Application Bandwidth Demand & Oversubscription in case demand increases. 2.7 DCCN Challenges Challenges for the Data Centers Communication Network fields are as follows: • Due to the fact that router links carry traffic going in and out of the data centre, the bandwidth available over these links for server communication across various data centre areas is constrained [8]. • Network congestion and latency issues. 
As in a data centre, where many applications are active and producing distinct packet flows that move both within and across the DCCN, a multi-tier hierarchical structure with numerous hops, there are times when packet aggregation creates a bandwidth constraint. As the receiver's buffer space for absorbing packet
  • 5. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 5 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ flows is running short, the incoming rate from numerous transmitters surpasses the rate at which it can handle packet flows. This congestion causes the delay time to lengthen and the receiver to begin dropping packets, which has an impact on various applications, particularly those that depend on latency, and lowers the network's overall performance. • Managing the read/write, backup, and replication traffic that moves within the data centre due to the separation of application servers, databases, and storage, which accounts for over 75% of all traffic (i.e., east-west traffic between racks). Additionally, because jobs are distributed across numerous servers and assigned in parallel processing, DCCN's internal traffic is increased. • Support for a range of traffic patterns, such as long- lasting, high-bandwidth "elephant" flows and short- lasting persistent "mouse" flows. Examples of entirely large data producers include particle accelerators, planes, trains, metros, undergrounds, self-driving cars, patient data in healthcare centers, etc. In the year 2014, the Boeing 787 produces 40 terabytes of onboard data every hour, of which 0.5 terabytes per hour is transmitted to a data centre. The majority of flows within a data centre are typically mouse flows, while only a few elephant flows contain the majority of the data. Managing every sort of traffic at once while keeping the overall network delay within bounds is therefore a major issue. • As a network grows, difficulties with scalability, agility, and efficient resource use arise. In addition to making fault handling and debugging more challenging as the number of communication devices increases, this will also result in an increase in management overhead bytes on the network. • As additional devices are added, energy requirements and consumption will increase. Currently, in traditional data centers, the top of the rack (TOR) switch, also known as the fabric interconnect, is where all of the servers in a rack are connected. Each rack has a number of servers, either rack mount servers or chassis-based servers. On one side, this aggregation layer provides communication between data centre clusters, allowing packet-based East-West traffic to pass through, while on the other, it also provides connectivity with the core network, allowing North-South traffic to pass. As a result, the aggregation layer is the layer where problems arise because 75% of the data center’s east-west traffic passes through it, causing a bandwidth bottleneck & an increase in latency. 2.8 DCCN Topologies The architecture of a data centre communication network (DCCN), which serves as the foundation for its applications, must be scalable, reliable, latency-free, and have adequate capacity to prevent network congestion. These qualities heavily rely on the network architecture in which the DCCN is installed and are crucial to the overall effectiveness of the data centre. Figure below depicts the Common DCCN Architecture, as stated by Cisco [9]. Fig 2.4 Common DCCN Architecture Layered architecture has the advantage of enhancing the flexibility, modularity, and resilience of networks. 
Every layer in this design performs a separate role for unique profiles, such as the routers in the core layer (i.e., the aggregation router and the border router), which use routing protocols to decide how to forward traffic for both ingress and egress. Core switches, which offer incredibly flexible, scalable connections with numerous aggregation layer switches, make up the Layer 2 domain. This layer serves as a default gateway and a focal point for server IP subnets. Between numerous aggregation layer switches, it is typically used to transfer traffic between servers, and stateful network devices such server load balancers and firewalls are attached to this layer. The switches that make up the access layer are typically used by servers to connect to and communicate with one another. It contributes to better network administration and is in charge of exchanging any kind of traffic between servers, including broadcast, multicast, and unicast. The bottom of the above illustration shows a server layer stacked in server racks. Numerous Virtual machines are being operated by these servers and assigned to various data centre applications. To manage and respond to requests from external clients arriving through the internet, an application is typically linked to several public IP addresses. These requests are split up among the pool of servers for processing by
  • 6. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 6 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ specialized equipment called a load balancer that is redundantly connected to the core switches [10]. 2.8.1 Fat-Tree Al-Fareset and colleagues [11] have proposed this DCCN architecture. It makes use of the fat-tree idea and aims to boost fault tolerance, end-to-end cross sectional bandwidth, scalability, and cost-effectiveness. With this topology, the core and several pods make up the entire infrastructure. Servers, TORS (Access switches), and aggregation switches make up the pods. Fig 2.5 Fat-Tree The topology adapts the data lines in the DCCN architecture to give a customized IP address scheme and multipath routing algorithm. Below figure compares 3 tier and fat-tree systems. Fig 2.6 Comparison of 3 Tier/Fat-Tree Topologies 2.8.2 B-Cube The structure [12] is suggested for the modular DCs created within containers and offers quick installation and migration, but scalability is limited because container data centres are not meant to be scaled up. Layers of COTS (commodity off the shelf) switches and servers make up this architecture, and they are in charge of packet forwarding. At level 0, the BCube architecture is made up of many BCube0 modules, each of which has a switch with n ports connected to n servers. n switches make up the BCube1 module at level 1, which is connected to n BCube0 modules or networks. One server from the BCube0 module is linked to each switch of BCube1. Fig 2.7 B-Cube Topology 2.8.3 DCell Guo et al proposed a recursively specified design. Mini switches and servers are used in this very marketable design to forward packets. According to below figure, Dcell1 is made up of n+1 DCell0 modules, each of which is connected to the others by a single link via its servers. Fig 2.8 DCell Topology Architecture 2.8.4 VL2 A Fat-Tree based DCCN design called Virtual Layer 2 (VL2) aims to provide a flat automated IP addressing scheme that makes it possible to install servers anywhere in the network without manually configuring their IP addresses. This makes it possible for any service to be allocated to any network server. In order to scale to a large server pool, it makes use of [14] address resolution based on end systems. Fig 2.9 Network Architecture VL2 In addition to all architectures, there is a need for architecture which can handle the present issues the DCCN is encountering as well as give backward compatibility with existing infrastructure and its architecture. In this thesis, I suggested the HFPFMOS data centre design, which can
  • 7. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 7 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ partially address the shortcomings of the existing architecture while simultaneously being backwards compatible with it. 2.8.5 HFPFMAOS The data centre communication network architecture (DCCN) called HFPFMAOS, is suggested as a solution to the various problems that data centres in the present face while maintaining the infrastructure. Particularly,in this architecture, standard switches are used at the aggregation layer. By monitoring all of the flows across the OF equipped switches, this will give us consolidated network intelligence and enable us to dynamically establish High bandwidth data paths between the data centre servers whenever and wherever they are needed. Additionally, it will lower network management overhead bytes and aid in quick defect detection and correction. The conceptualized design is shown below. Fig 2.10 Architecture – HFPFMAOS - DCCN’s III. MEMS OPTICAL SWITCHING 3.1 Introduction Interconnection between DCN demands a significant capacity increase due to the exponential daily growth of traffic. Therefore, the best way to relieve network congestion caused by electronic switched is to use optical switching. Although optical fibre has a very high bandwidth, it is constrained by electronic switching and transmission capacity. As switching is done in the optical domain, all optical switching can therefore play an important role. 3.2 OOO Switching (All Optical) Modifying optical cross connections, a circuit is created from the ingress node to the egress node in all optical switching, and data travels via this circuit entirely in optical form. This can assist in addressing electronic switched network bottleneck since data is switched in the optical domain directly rather than going through many optical- electrical-optical conversions. 3.3 OOO Switching Benefits • POD-based architecture is typically used in data centres, which results in low use of computing resources. However, optical switching enables sharing of computing resources among several PODs for optimal effectiveness. • Increases revenue by quickly deploying new services. • Less power is lost compared to a traditional electrical switch. • On-demand capacity creation and reallocation. • A remarkable increase in the data centre's operational efficiency as a result of the smooth functioning of applications and the efficient use of computational resources and 3D optical MEMS switching technology. • Boost defect detection and quality of service. 3.4 Technology using OOO Switching Different optical switching methods [17] are used by many switches. The same are shown in the below table: Table 3.1 Contrast of various Optical switching technologies 3.5 Micro Electro Mechanical Systems (MEMS) The branch of engineering MEMS (Micro electro mechanical system) which merged computer technology with minuscule mechanical components found in semiconductor chips, including actuators, gears, sensors, valves, mirrors, and valves [31]. It contains mechanical components, like micro-mirrors or sensors which h can be easily adjusted as needed, and reflect optical signals from input to output fibres placed within a tiny silicon chip that embedded micro-circuitry. 
Light beam can be reflected from the input to output by tilting/rotating the mirror at varied angles thus directing the traffic to the ports.
  • 8. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 8 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ 3.6 Design and Principle Below figure shows the Structure of 3D MEMS optical switch [16]. Fig. 3.1 Optical Switch Fig 3.2 Standard Format OS Fig 3.3 A toroidal concave mirror is used in a 3D MEMS optical switch. The tilt angle of MEMS mirror measures the incident angle at which the optical beam from the I/P collimator array meets the concave mirror after reflecting by the I/P MEMS mirror array. Then light beam is again converged with a shift in position "/" by this toroidal concave mirror, that does an optical Fourier transform. This shift"/" can be quantitatively expressed as follows using the concave mirror focal length and the MEMS mirror tilt angle: / = f x 2δ 512 ports can be obtained by using a 2X2 array of collimator arrays with 128 optical ports and MEMS mirror arrays with 128 ports. This makes it easier to fabricate high-capacity optical switches with a negligibly small cumulative pitch error. Fig 3.4 Schematic of 3D 512 MEMS optical ports switch 3.7 MEMS Mirror Structure The same is shown in the below figure. Fig 3.5 MEMS tilt mirror cross-sectional schematic In order to generate an air gap between the electrodes and the mirrors, substrates are individually produced, then linked together by flip chips. Electrostatic force is produced when electrodes and mirrors are connected by a voltage, and this force moves the mirrors. Therefore, we are able to adjust the tilt angle of these mirrors by supplying a driving voltage to each electrode. 3.7.1 Mirror Substrate They are formed of a single silicon crystal, giving the mirror incredibly steady movement. MEMS mirror is supported on X-Axis by two folded torsion springs, and the gimbal ring is connected to the base on the Y axis by a second pair of folded torsion springs [16]. The mirror cannot be dragged down and come into contact with the electrodes because of the great rigidity of these springs in the Z direction relative to torsion direction. As a result, moving the mirror on the X and Y axes makes it simple to reflect an optical beam in 3D space in a particular direction. Fig 3.6 High aspect torsion spring and MEMS mirror are snapped together via SEM
  • 9. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 9 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ 3.7.1.1 Mirror Electrode Substrate Fabrication Method A layer of polyimide is then spun over the created mirror pattern in step 7[16] (a) (b). The third process involves dry etching to create the pattern for the mirror opening (d) and resistant mask (c) on the opposite side of the bulk Si. The polyimide coating serves as an etching stopper. Following the dicing procedure, the polyimide layer is removed using oxygen plasma (g). "In-process sticking of the mirror" is the name of the dry procedure used to fabricate mirrors. AS Since the top surface of the mirror has an au coating, which gives optical beams high reflectivity, both sides of the mirror have this coating. Peak-to-valley difference can be utilized to assess the mirror surface's flatness, which in our case is 0.05 m, as the optical features of the switch made up of a MEMS mirror array depend on it. Fig 3.7 Mirror substrate fabrication process flow diagram 3.7.2 Driving of Electrode Substrate Mirrors move due to this electrostatic force. 3.7.2.1 Fabrication of Driving Electrode Substrate Fig 3.8 Electrodes’ SEM snap Fig 3.9 Driving Electrode Substrate’s Fabrication flow process 3.8 MEMS Mirror Movement Figure below shows connection between the voltage applied and tilt angle of mirror. Fig 3.10 Relationship between Voltage applied & Tilt Angle of Mirror+ Size, flatness, fill factor, and scan angle of the micro mirrors, along with their scan angle and fill factor, all have an impact on how many ports the 3D MEMS switch can support. 3.9 Key Advantages of MEMS Switches in DCCN • They offer any-to-any communication between servers, allowing for very low latency and the transfer of enormous amounts of data. which is much lower than IP switches' latency. • Can handle any data rate. • They can circumvent the packet-based aggregation network and offer direct, high-capacity pure optical data links between any TOR switches to reduce network latency and shift sensitive traffic between servers as needed.
  • 10. Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023 This article can be downloaded from here: www.ijaems.com 10 ©2023 The Author(s). Published by Infogain Publication, This work is licensed under a Creative Commons Attribution 4.0 License. http://creativecommons.org/licenses/by/4.0/ IV. SOFTWARE DEFINED NETWORKING (SDN) 4.1 Introduction Adoption of practice of creating and maintaining networks results in an architecture which is called Software Defined Networking (SDN). The major aim of this kind of architecture is to facilitate programmability for the control plane by separating the devices' that is control plane from their data/forwarding plane. 4.2 SDN Architecture Control and Data planes at present coexist on the same network device, or, to put it another way, all data flow functions, such as switching, forwarding, and routing, as well as the various protocols used to make these decisions, are housed on that one device. "Provide open user- controlled administration of the forwarding hardware of a network element," according to the definition of SDN in [7], is the fundamental objective. The following elements are part of an architecture based on software defined networking (SDN) are shown in the below figure: Fig 4.1: Architecture for Software Defined Networking (SDN). 4.2.1 Application Layer Applications can include network characteristics like forwarding schemes, manageability, and security policies, among others. With the aid of the controller, the application layer can abstract the overall view of all network components and use that data to provide the control layer the proper direction. In our approach, management programs that offer CLI/GUI for the network devices are operating along with network monitoring tools. 4.2.2 Control Plane It is network's "brain" and is in charge of managing and programming forwarding plane. The control plane employs a topology database to give an abstract picture of all the network components. 4.2.3 Data Plane The infrastructure layer, which is the lowest layer in the SDN network architecture and includes forwarding Network Elements like (Router, Switches). Data transmission, statistics collection, and information monitoring are among its primary functions. MEMS switches are present at the aggregate layer while OF switches are located at the access. 4.2.4 API’s which are North Bound These are software interfaces b/w controller's software modules and SDN applications. The informal name used for the interface between the application/control plane is called the Northbound interface. 4.2.5 API’s which are South Bound South bound APIs are standardized APIs (application program interfaces) that allow SDN controllers to communicate with switches and routers and other forwarding network hardware [18]. 4.2.6 East-West Protocols In a multi-controller-based architecture, these protocols are used to control how different controllers communicate with one another. In a broader sense, SDN presents a networking architecture in which choices regarding the routing of data traffic are made external to the actual switching hardware. Therefore, genuine network intelligence can be achieved in data centres when SDN, PFFS, and MOOOS are applied. Second, a secure standard protocol should exist to enable communication between SDN controllers and network devices. 
This logical architecture may be implemented differently by different vendors' equipment, but from the standpoint of the SDN controller it behaves like a consistent logical switch. OpenFlow (OF) is the first and most extensively used interface between the infrastructure layer (data plane) and the control plane because it satisfies both of these requirements.

4.3 OpenFlow (OF)
The Open Networking Foundation (ONF) has standardized OpenFlow as the most popular southbound interface of the SDN architecture. The first version of OF, version 1.0, was created at Stanford University and was later taken over by the ONF; version 1.5 of OF was released in December 2014. OF is an open standard communication protocol that defines how one or more control servers interact with SDN-compliant switches. An OF controller installs flow table entries in OF-compliant switches so that traffic is forwarded in line with these flow entries. Network administrators can use these flow tables to alter the network configuration and data traffic patterns. Additionally, this protocol offers management tools for
packet filtering and topology change control. The OF protocol is supported by nearly all of the well-known vendors, including Cisco, HP, IBM, Brocade, and others. Two types of OF/SDN-compliant switches exist.

4.3.1 OpenFlow-Only Switches
These switches, which solely support OF operation and process all packets using the OF pipeline, are also known as pure SDN switches or SDN-only switches.

Fig 4.2 Architecture of OpenFlow-Only Switch

4.3.2 Hybrid OpenFlow Switches
These switches support both OF operation and conventional Ethernet switching and routing, as shown in the figure below.

Fig 4.3 Architecture of Hybrid Switch

4.3.3 Basic Architecture of OpenFlow (OF)
The figure below shows the basic OpenFlow architecture.

Fig 4.4 Architecture of Basic OpenFlow (OF)

OpenFlow switches are deployed in the access layer as TORs, connected to servers or end hosts on one side and to the traditional switches and MEMS switches at the aggregation layer on the other. As the control plane, the ODL controller is set up to communicate with the OF switches using the OF protocol over a secure channel encrypted with SSL or TLS (Transport Layer Security). Three tables make up the OF switch's logical design: the Flow Table, the Group Table, and the Meter Table.

4.3.4 Flow Table
It is the fundamental component of the logical switch design and determines the action to be taken on each incoming packet by comparing the packet against a flow table made up of numerous flow entries. A packet may pass through one or more flow tables that operate as a pipeline. As demonstrated in Fig 4.5, a controller can add, remove, and alter flow entries in an OF switch's flow tables either proactively or reactively (in reaction to an incoming packet).

Fig 4.5 Flow Table Entries Model

In the reactive model, flow entries are generated only after the corresponding packet has been received, and the subsequent action is taken in compliance with instructions from the controller; in the proactive model, flow entries are generated in advance as required, and packets are processed without consulting the controller. When a packet first reaches a switch, the switch's OF agent software looks it up in the hardware flow table (ASIC, for hardware switches) or the software flow table (for virtual switches). This information exchange is shown in the figure below.

Fig 4.6 Pipeline Processing

The figure below depicts the packet processing flow chart.
Fig 4.7 OpenFlow (OF) Packet Processing Flow Chart

Table 4.1 Main components of a Flow Entry

4.3.4.1 Match Field Values
These are the values against which incoming packets are matched; packets that do not match the field values are not selected, as shown in the below table:

Table 4.2 Match Fields of a Flow Entry

4.3.4.2 Priority
Setting a priority along with the match fields gives a flow entry a unique identity.

4.3.4.3 Counters
Counters hold information such as the number of bytes and packets actually received and the number of missed packets.

4.3.4.4 Instructions
Instructions either modify pipeline processing (for example by directing the packet to another table) or specify the desired set of actions.

4.3.4.5 Timeouts
A timeout defines the maximum lifetime, or maximum idle time, after which a flow entry expires and is removed from the switch.

4.3.4.6 Cookies
A cookie is an opaque data value selected by the controller. The OF controller can use it to filter requests for flow deletion and modification as well as flow statistics entries; cookies are not used when processing packets.

4.3.5 Group Table
In order to perform operations that may affect several flows, the Flow Table may send its flows to the Group Table. A group table is made up of group entries identified by group identifiers. The principal elements of a group entry are shown below:

Table 4.3 Group Table

4.3.5.1 Identifier
A 32-bit unique identifier identifies the group entry.

4.3.5.2 Group Type
The group type determines the group's semantics; the OF specification marks group types as "Required" or "Optional".

4.3.5.3 Packet Counters
These provide the actual count of packets generated and processed by a group.

4.3.5.4 Action Buckets
An action bucket consists of a complete set of actions which are performed in a specific order to modify packets and then send them to a port; the actions in a bucket are always executed as a single set. A group entry may have zero or more buckets, with the exception of groups of type "Required: Indirect", which contain exactly one bucket. If a group has no buckets, the packet is effectively dropped.

Fig 4.8 OpenFlow packet pipeline processing

4.3.6 Meter Table
The meter table makes it possible to perform simple QoS-related actions, such as rate limiting, on flows. It is made up of meter entries that specify per-flow meters. DSCP (Differentiated Services Code Point) based metering also enables packets to be divided into classes according to data rate. Instead of being attached to ports, meters are attached directly to flow entries. After measuring, they can influence the overall rate of the associated packet flows and can be configured to perform a specific action. The key parts of a meter entry are presented below:
Table 4.4 Main components of a Meter Entry

4.3.6.1 Meter Identifier
A meter entry is uniquely identified by a 32-bit identifier.

4.3.6.2 Meter Bands
A meter entry defines how packets are handled by listing its meter bands and their rates in an unordered list. A meter band describes the response of the meter to packets in a given rate range, computed from all packets received by that particular meter from all inputs. Only one meter band processes any given packet.

Table 4.5 Meter Band

4.3.6.2.1 Band Type
Defines the way packets are processed.
4.3.6.2.2 Target Rate
The required rate of a meter band.
4.3.6.2.3 Burst
Establishes the granularity of the meter band.
4.3.6.2.4 Counters
Counters are updated whenever a packet is processed by a meter band.
4.3.6.2.5 Type-Specific Arguments
Optional parameters for particular band types.

Meter bands are ordered by increasing target rate. A packet is processed by only one meter band, namely when the measured rate exceeds that band's target rate.

4.3.6.3 Counters
They are used to count how many packets a meter has processed. Within the same flow table, multiple flow entries may use different meters, the same meter, or no meter at all.

Fig 4.9 Hierarchical DSCP metering and meters

4.3.7 OpenFlow Channel Protocols
A secure OpenFlow channel is provided between switch and controller, typically encrypted with TLS (Transport Layer Security) or SSL. The protocol carried over this channel is the OF protocol [20].

4.3.7.1 Controller-to-Switch Messages
Messages that the controller initiates to manage the switch's logical state, such as its configuration, flow tables, and group tables.

4.3.7.2 Asynchronous Messages
Initiated by the switch, these messages comprise status updates informing the controller of changes to the switch's condition and of network events. They also include the "Packet-In" message, which switches use to transfer packets to the OF controller when their flow tables have no match.

4.3.7.3 Symmetric Messages
Hello messages are initiated by either the controller or the switch right after the connection between the two devices is made, and Echo messages are used to measure the latency and bandwidth of the connection between switch and controller as well as to confirm that the device is functioning as intended.

4.4 Benefits of SDN
SDN has the potential to be a promising technology for managing and solving various problems in data centre networks. Using the SDN approach, the network administrator only needs to declare settings in a single place, from which all devices are managed and directed by the SDN controller. Data centres face scaling problems, particularly as the number of servers and virtual machines (VMs) running on them rises and, later, as the requirement for migration (VM motion) rises. Significant bandwidth is needed for virtual machine migration and for updating the MAC address tables, which raises network latency and lowers overall network performance. In typical data centre architectures, users may therefore encounter interruptions when accessing applications.
Consequently, the researcher suggests that SDN, all-optical switching, and virtualization technologies such as OTV can be used to address this study gap.
Table 4.6 OpenFlow Messages

V. HFPFMAOS
5.1 Conceptualized HFPFMAOS Solution
As the contribution of this research, HFPFMAOS (Hybrid Flow based Packet Filtering, Forwarding & MEMS based All Optical Switching) is proposed as a remedy for the problems and challenges facing the DCCN. Eight phases make up the implementation of this research concept:
• Deployment of an SDN controller and of OF in the TOR switches, which connect to the controller through SSL.
• Deployment of the MAOS plane at the aggregation level and its connection to the SDN controller.
• Development of a database for the conceptualized network topology.
• Flow based Packet Filtering and Forwarding (FPFSF).
• Monitoring of outgoing ports, computation of link utilization, and alerting of the SDN controller.
• Flow Table lookup and flow entry inspection to find the flows consuming more bandwidth and to identify the source and destination of those flows.
• Building high-bandwidth data connections between the TOR switches that correspond to the identified source and destination.
• Moving a particular traffic flow to a high-bandwidth link constructed over the MAOS plane, then moving it back to its normal path and tearing down the high-bandwidth link when that flow vanishes or resumes operating at its regular data rate.

5.1.1 Installation of OpenFlow in the Access Layer and of the SDN Controller
Initially, the researcher deploys an SDN controller and enables OpenFlow in all of the access layer (TOR) switches, without disturbing the arrangement of the rest of the network's switches, which keep working just as before. A secure, encrypted SSL (Secure Sockets Layer)/TCP connection is then established between each OpenFlow-enabled access switch and the SDN controller. This link carries the OF protocol communication between the SDN controller and the OF switch. It is further used by the SDN controller and by a number of northbound applications for the initial hello messages (OFPT_HELLO, then OFPT_FEATURES_REQUEST from controller to switch and OFPT_FEATURES_REPLY from switch to controller), for topology discovery using LLDP (Link Layer Discovery Protocol) and BDDP (Broadcast Domain Discovery Protocol) and the building of databases, and for message alerts (OFPT_PACKET_IN to the controller and OFPT_PACKET_OUT back to the switch), including for the flows that are moved onto the all-optical paths between OF switches.

5.1.2 Setting up the MAOS Plane
At the aggregation level, the researcher deploys the MAOS (MEMS based all optical switching) plane alongside the conventional switches. In this approach the existing packet-based switching continues to function together with the newly proposed MAOS, which provides dynamically established high-speed data routes between servers for switching elephant traffic as needed. In addition, centralized control is exercised by connecting the plane to the central SDN controller, which supervises the network elements (NEs) and controls the traffic flow. The figure below shows the conceptualized architecture.
Fig 5.1 Conceptualized Architecture of HFPFMAOS for DCCN

At the aggregation layer, the MAOS plane is implemented alongside the conventional switches.

5.1.3 Developing the Topology Database

Fig 5.2 Database Topology Development Steps

5.1.3.1 Discovery of OpenFlow (OF) Switches

Fig 5.3 OpenFlow (OF) Switches Process

5.1.3.2 Discovery of Active Links
OFDP can detect indirectly accessible multi-hop links (routes) between OF switches by leveraging the L2 discovery protocol known as BDDP [22]. Most well-known open-source controllers, including ODL and Floodlight, adopt this technique.

Fig 5.4 Structural Framework of LLDP
Fig 5.5 BDDP Frame Structure

A switch's identity, basic capabilities, and other characteristics are determined by the OF controller via the OF protocol, which includes sending "OFPT_FEATURES_REQUEST" messages to all switches as part of the initial handshake.

Fig 5.6 Link discovery via OFDP

The messages exchanged between the OF switches and the controller for link discovery are shown below.

Fig 5.7 Message flow between controller and OpenFlow (OF) switches for links
The metadata of the OF switches on which BDDP packets are received is included in the "OFPT_PACKET_IN" messages. These messages are exchanged for both directions of every link between the OF switches and their controller, so each active link is reported twice. Based on the BDDP data and the metadata, the controller can identify indirect multi-hop links between OF switches after receiving these "OFPT_PACKET_IN" messages and store them in its database to create a network topology. Most crucially, the controller counts the number of hops between OF switches using the TTL value. The default time interval for this topology discovery operation is 5 seconds.

The total number of "OFPT_PACKET_IN" messages received by the controller during this entire discovery process is equal to double the number L of active links accessible in the domain:

Packet-In messages = 2 × L

The total number of "OFPT_PACKET_OUT" messages sent by the controller can be computed as the sum of the active ports of all switches:

Packet-Out messages = Σ Pi = S × P (when every switch has P active ports)

where S denotes the number of OF switches and P the number of active ports per switch. Our reference topology consists of 4 OF switches with 2 active ports each, hence the "OFPT_PACKET_OUT" messages issued by the controller are 2 + 2 + 2 + 2 = 8 in total.

The total number of BDDP OFPT_PACKET_OUT messages sent by a controller can be reduced by implementing OFDPv2 [23], in which the Port ID TLV field is set to 0 and ignored, while the source MAC address field is set to the MAC address of the port through which the packet is to be sent out. With OFDPv2, the controller sends one "OFPT_PACKET_OUT" message per switch, so the total number of messages transmitted equals the number of switches.

As each OF switch port is physically connected to a MEMS switch port, we do not need link discovery to establish connectivity with the MAOS plane; instead, the controller can create paths between any two points dynamically, and the topology database can statically store all the data related to flows and port connections.

Table 5.1 Topology Database

5.1.3.3 Discovery of Hosts
Two methods have been applied to discover the available hosts connected to the OF switches. (i) Since all active ports on all switches send BDDP "Packet-Out" messages, ports for which the OF switches do not return "Packet-In" messages are recognized as host ports during link discovery. (ii) A gratuitous ARP (GARP) message is sent by the host when it is connected to an OF switch. HSRP and VRRP [24] use GARP to update the MAC address tables of L2 switches. In a broadcast domain, GARP is an advance-notification mechanism that keeps the controller aware of host discovery, allows MAC-address flow entries to be inserted into the flow tables of the OF switches, and updates the ARP tables of the other hosts before they issue ARP requests, both when a new host is connected to a switch and when a host's IP or MAC address changes due to a failover.
Gratuitous ARP requests are special ARP packets in which the source (host) IP address and the destination IP address are the same. The broadcast MAC address (ff:ff:ff:ff:ff:ff) is placed in the destination MAC address field and the EtherType field is set to 0x0806. The parameters listed below are part of a GARP request message:
• Destination MAC address: FF:FF:FF:FF:FF:FF (broadcast)
• Source MAC address: MAC address of the host
• Source IP address = Destination IP address: IP address of the host transmitting the GARP
• EtherType: ARP (0x0806)
An example of a GARP request is shown in the figure below.

Fig 5.8 GARP Request Message
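As an illustrative sketch only (not part of the testbed described in this work), a GARP request with exactly these field values could be constructed and sent from a host with Scapy; the interface name and addresses below are assumptions:

# Hypothetical sketch: craft and send a gratuitous ARP request with Scapy.
from scapy.all import Ether, ARP, sendp

host_mac = "00:00:00:00:00:01"   # assumed MAC address of the host
host_ip = "10.0.0.1"             # assumed IP address of the host

garp = (Ether(src=host_mac, dst="ff:ff:ff:ff:ff:ff") /   # broadcast frame, EtherType 0x0806
        ARP(op=1,                                        # ARP request
            hwsrc=host_mac, psrc=host_ip,                # sender MAC/IP
            hwdst="00:00:00:00:00:00", pdst=host_ip))    # target IP = sender IP (gratuitous)

sendp(garp, iface="h1-eth0")     # interface name is an assumption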
Each host pings its neighbour after GARP is finished to verify connectivity and reachability. At this point, all hosts are aware of one another's MAC addresses and assigned IP addresses, which are visible in their ARP tables. To stop BDDP packet propagation, the SDN controller disables link discovery on the OF switch ports to which hosts are connected once discovery is complete. Similar to the BPDU guard security function in conventional switches, this suppression also prevents BPDU propagation [27].

Table 5.2 ARP table of Host 1 linked with OSW1-1
Table 5.3 ARP table of Host 2 linked with OSW1-1

5.1.4 Discovered Routes and Forwarding Table Build-up
The controller receives information about the interfaces of the OpenFlow (OF) switches, and the list of all routes and destinations (MAC/IP addresses) also becomes visible through it, because the controller holds the topology map of the entire network and a centralised database. Following topology discovery, the SDN controller assigns a Route-Tag to each discovered route. The main purpose of the Route-Tag is to uniquely identify each discovered route and to generate a forwarding table that lists all destination (MAC/IP) addresses reachable over the same route. This forwarding table is installed by the SDN controller into the OF switch so that a flow can be sent to an outgoing interface based on its Route-Tag. Traditionally, the source and destination addresses are used to forward a frame to the next hop, because the content addressable memory table also includes a list of MAC addresses reachable over each link (port). Later, when the same flow reaches an OF switch, the switch again searches all its flow tables and looks for the best possible match before forwarding the flow to the correct host port after verifying its destination address.

5.1.5 Building up the Flow Table by Inserting Flow Entries
When a flow enters an OF switch, each incoming packet is compared against one or more flow tables, each of which contains multiple flow entries, and the matching entry determines the action to be taken. Any sort of matching can be used, for example matching on an ingress port, a source or destination MAC or IP address, a VLAN ID, a TCP/UDP port number, etc. To provide maximum flexibility and support for the diverse types of data traffic passing through the DCCN while serving delay-sensitive traffic with minimal latency, a hybrid model of flow table entries that combines the proactive and reactive modes is employed in this thesis. The proactive model is required for delay-sensitive applications that demand low latency, for example audio/video calls, live radio/TV transmissions, and financial transactions, while the reactive model handles routine heavy traffic such as web browsing, data file/folder transfers, and peer-to-peer traffic.

Fig 5.9 Anatomy of OpenFlow (OF)

When a flow entry is found whose fields closely match the corresponding fields of the data traffic, the switch either executes the action of that flow entry or forwards the packet to a group table to execute multiple actions.
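To make the matching just described concrete, the following sketch (illustrative only, not the author's implementation; field names, addresses and the output port are assumptions) models a flow entry with match fields, priority, counters, timeouts and cookie, and performs the highest-priority lookup. On an Open vSwitch based testbed an equivalent proactive entry could be installed and inspected with commands such as ovs-ofctl -O OpenFlow13 add-flow OSW1-1 "priority=100,ip,nw_dst=10.0.0.2,actions=output:8" and ovs-ofctl -O OpenFlow13 dump-flows OSW1-1.

# Hypothetical sketch of a flow entry and the highest-priority match lookup.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict               # e.g. {'eth_type': 0x0800, 'ipv4_dst': '10.0.0.2'}
    priority: int             # when several entries match, the highest priority wins
    actions: list             # e.g. ['output:8'] or a reference to a group table entry
    idle_timeout: int = 0     # seconds without a match before removal (0 = disabled)
    hard_timeout: int = 0     # absolute lifetime in seconds (0 = disabled)
    cookie: int = 0           # opaque value chosen by the controller
    packet_count: int = 0     # counters, updated on every hit
    byte_count: int = 0

def lookup(flow_table, pkt):
    """Return the highest-priority matching entry, or None on a table miss
    (in which case the packet would be sent to the controller)."""
    hits = [e for e in flow_table if all(pkt.get(k) == v for k, v in e.match.items())]
    if not hits:
        return None
    best = max(hits, key=lambda e: e.priority)
    best.packet_count += 1
    return best

# Example: one proactive entry towards Host 2 out of port 8 (values are assumptions).
table = [FlowEntry({'eth_type': 0x0800, 'ipv4_dst': '10.0.0.2'}, priority=100, actions=['output:8'])]
print(lookup(table, {'eth_type': 0x0800, 'ipv4_dst': '10.0.0.2', 'tcp_dst': 80}))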
The importance of the priority parameter cannot be overstated: flow entries are prioritised, and if several flow entries match, the one with the highest priority is used while the others are ignored. The table below shows the different types of counters.

Table 5.4 Types of Counters

By issuing an OFPMP_PORT_STATS request to the OF switch, the SDN controller continuously receives the statistics of the outgoing active links (ports), such as the number of Rx packets, Tx packets, Rx bytes, and Tx bytes, as well as the duration in seconds, dropped packets, and Tx/Rx errors. The link utilisation of all the desired ports is calculated by the controller based on the statistics that were returned. The link utilisation is calculated using the following formula [28]:
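Fig 5.10 below reproduces the formula; as a minimal illustrative sketch, assuming the standard form Utilization (%) = Δbytes × 8 × 100 / (Δt × link speed in bit/s), the controller-side computation over two successive OFPMP_PORT_STATS samples might look like this (the threshold value is an assumption):

# Hypothetical sketch of per-port link-utilization computation from two port-stats samples.
def link_utilization_percent(prev, curr, link_speed_bps):
    """prev/curr: dicts with 'tx_bytes', 'rx_bytes' and 'duration_sec' from OFPMP_PORT_STATS."""
    delta_bytes = (curr['tx_bytes'] - prev['tx_bytes']) + (curr['rx_bytes'] - prev['rx_bytes'])
    delta_t = curr['duration_sec'] - prev['duration_sec']
    if delta_t <= 0:
        return 0.0
    return (delta_bytes * 8 * 100.0) / (delta_t * link_speed_bps)

THRESHOLD = 70.0  # percent; assumed value of the per-port utilization threshold

def congested_ports(samples_prev, samples_curr, link_speed_bps):
    """Return the port numbers whose utilization exceeds the configured threshold."""
    return [port for port in samples_curr
            if link_utilization_percent(samples_prev[port], samples_curr[port],
                                        link_speed_bps) > THRESHOLD]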
Fig 5.10 Formula to calculate Link Utilization

For each of the outgoing interfaces, the controller maintains a specific link-utilisation threshold. When the threshold is exceeded, it consults its flow table to filter the flows whose outgoing interfaces have exceeded the threshold. Following the identification of the ports, the SDN controller determines whether any ports are available before creating a high-bandwidth data path across the MEMS plane.

If there is no match in the flow table, a table miss occurs and the packet is sent to the controller by default. The controller then instructs the switch to take certain actions. The "Flow Mod" message contains a variety of information, including buffer IDs, timeouts, actions, priorities, and more. Additionally, flow entries can be permanent or valid for a limited period of time, and there are two types of timeout: "hard timeout" and "idle timeout". Idle timeout refers to the removal of an entry from the flow table if no packet matches that flow entry during the given time period. Hard timeout refers to the maximum amount of time an entry can remain in the flow table, regardless of whether matching packets arrive. If the hard timeout is set to 0, it is deactivated.

As an illustration, in our reference topology diagram, Host 1 sends an HTTP request to Host 2 (say, a web server) following host discovery by GARP. It begins with a SYN message. Host 1 sends a SYN message to the OSW1-1 switch, which checks its flow table upon receiving the packet; because it is the initial packet, there are probably no flow entries matching it. This is called a table-miss flow entry. Therefore, by encapsulating this TCP packet in a "Packet-In" message, the switch passes it to the controller by default. This Packet-In message contains either the entire TCP packet or its buffer ID (for example, Buffer ID = 250, which designates the location where the switch stores the whole TCP packet). The controller then takes a few actions, such as returning a "Flow Mod" or "Packet-Out" message to the switch, where the "Packet-Out" includes instructions on how the switch should handle that particular packet, together with either the whole encapsulated TCP packet or the reference buffer ID under which the switch stores it. On receiving the "Packet-Out" message, switch OSW1-1 sends the buffered TCP SYN packet (Buffer ID = 250) out of port 8 towards Host 2. The switch is then directed, through the "Flow Mod" message, to add a new flow entry to its flow table. When a similar packet arrives at the switch in the future and matches the entry's fields and masks, this flow entry tells the switch what to do. In this way the message informs the switch to route any TCP requests from Host 1's IP address or MAC address towards Host 2's IP address or MAC address out of port 8. Additionally, it tells the switch to release the packet it had been buffering with Buffer ID 250 and to carry out the instructions in the message. Host 2 replies by sending a SYN/ACK packet to the switch, which receives it but finds no flow entry from Host 2 to Host 1 (yet another table miss).
In order to send this SYN/ACK packet to the controller for additional analysis, the switch encapsulates it in a "Packet-In" message and gives it the reference Buffer ID = 251. In response, the controller instructs the switch to add a flow entry to the flow table and to perform an action, namely forwarding the SYN/ACK message out of port 7; to this end, the controller sends the switch a Packet-Out and a Flow Mod message. After all of this, the remainder of the communication between Host 1 and Host 2 no longer reaches the controller, because the switches now have flow entries in their flow tables that tell them what to do with the packets. The switch routes the HTTP reply and ACK messages directly, as demonstrated in the following figures:

Fig 5.11 (a) HTTP request with OpenFlow messaging
Fig 5.11 (b) HTTP reply with OpenFlow messaging
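The thesis uses the OpenDaylight controller, but the reactive exchange above can be sketched with any OpenFlow 1.3 controller framework. The following minimal Ryu-style handler (illustrative only; port numbers, addresses and timeouts are assumptions) answers a Packet-In by installing a flow entry and releasing the buffered packet, mirroring the Flow Mod / Packet-Out description above:

# Hypothetical sketch of the reactive Packet-In / Flow-Mod exchange with Ryu.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class ReactiveHttpSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # Packet-In: carries the packet or its buffer ID
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # A real handler would parse the packet first; this match simply reproduces
        # the Host 1 -> Host 2 TCP example (addresses and output port are assumptions).
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, ipv4_dst='10.0.0.2')
        actions = [parser.OFPActionOutput(8)]                        # towards Host 2
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # Flow-Mod installs the entry; passing the buffer ID also releases the
        # buffered SYN packet (e.g. Buffer ID = 250) according to the new entry.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100, match=match,
                                      instructions=inst, idle_timeout=30,
                                      hard_timeout=0, buffer_id=msg.buffer_id))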
The following table shows the flow entries of the OF switch:

Table 5.5 Flow Table of OF Switch OSW1-1 – Flow Entries

The very first entry instructs the switch to broadcast packets arriving from the controller port, regardless of their EtherType, to all other switch ports (these are the BDDP messages) and to update the counter. The second entry instructs the switch to issue a GARP request and update the counter, regardless of the other fields, if the EtherType is 0x0806. The third, fourth, and fifth lines perform L2 matching and forwarding: they instruct the switch to carry out the specific action given in the Action field and update the counter, regardless of the other fields, if the packet arrives with a certain destination MAC. The sixth and seventh lines route IP packets to a particular port based on their destination IP. The eighth line is the default route for our reference topology, and the final two flow entries are TCP flow entries for the HTTP requests and replies.

When a large number of brief flows, sometimes referred to as "mouse flows", start flowing alongside long, persistent, high-bandwidth data flows, or "elephant flows", a bandwidth bottleneck results. High-bandwidth data flows, which can be caused by VM migration, database backups, or software such as Hadoop, among other things, cause the switches' buffers to overflow, which slows down packet processing and negatively affects applications that require low latency. Nowadays there are many virtual machines running on a single server for various applications. Suppose that, for whatever reason, one virtual machine running one application in Server Rack 1 starts using a lot of bandwidth and CPU, which adversely affects the other applications operating on the same server. Since the switch buffers start to overflow, the administrator is compelled to move the virtual machine to Server Rack 2 of the same POD, which results in a bandwidth bottleneck and increased latency on the network links, as shown in the figure below.

Fig 5.12 Congestion – links coloured red indicate congestion on an active packet-switched network link that a packet is attempting to cross

As seen in the figure, the switches alert their management/control plane to the network congestion and increased delay.

Fig 5.13 Congestion Notification by Switches to the Control Plane

In reply, the management/control plane inspects the packets to determine their source and destination, consults its topology database, and calculates the locations between which high-bandwidth data routes must be established.
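As an illustrative sketch of this decision step (not the author's implementation), the controller logic can be expressed as follows: when a port's utilization exceeds the threshold, the per-flow byte counters of that port are inspected, the heaviest ("elephant") flow is selected, and the pair of TOR switches between which an optical path must be built is returned. The flow-record fields, the threshold and the path-setup call are assumptions:

# Hypothetical sketch: pick the elephant flow on each congested port and decide
# which TOR switches need a MEMS (MAOS) optical path between them.
def plan_optical_paths(port_utilization, flows_on_port, threshold=70.0):
    """port_utilization: {port: percent}; flows_on_port: {port: [flow dicts]}.
    Each flow dict is assumed to carry 'byte_count', 'src_tor' and 'dst_tor'."""
    decisions = []
    for port, util in port_utilization.items():
        if util <= threshold:
            continue                                        # port is not congested
        flows = flows_on_port.get(port, [])
        if not flows:
            continue
        elephant = max(flows, key=lambda f: f['byte_count'])  # heaviest flow on the port
        decisions.append((elephant['src_tor'], elephant['dst_tor'], elephant))
    return decisions   # the MAOS plane would then be asked to connect each TOR pair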
As a result, the SDN control plane sends messages to the MAOS plane, which is made up of MEMS-based optical switches.

Fig 5.14 MEMS switches produce a high-data-rate optical path

As seen above, data now travels both across the packet-switched network and over a temporary, high-bandwidth channel, depicted in green, which is enabled by the MEMS-based switches. Once the persistent high-bandwidth flow has subsided and the traffic flow has returned to normal, the control plane tears down the temporarily established optical link and reallocates it as needed.

VI. SOFTWARE IMPLEMENTATION
6.1 Reference Network Topology
In order to implement the proposed solution in software, the researcher used the conceptualized architecture below.

Fig 6.1 Prototype-testing Reference Network Topology

The network topology consists of 2 pods, POD 1 and POD 2, each of which includes eight hosts or servers, two aggregation switches, and two access switches. POD 1 is made up of two aggregation switches (SW3 & SW4), two access (TOR) switches (SW5 & SW6), and eight traffic-producing hosts (Host 1 through Host 8). POD 2 likewise consists of 8 hosts (Host 1 to Host 8), 2 TOR switches, named SW9 and SW10, and 2 aggregation switches, named SW7 and SW8.

6.2 Software Used
The tools incorporated are shown in the figures below:

Fig 6.2 Software Tool Used – Mininet
Fig 6.3 Software Tool Used – MiniEdit
Fig 6.4 Software Tool Used – OpenDaylight
Fig 6.5 Software Tool Used – IPERF
Fig 6.6 Software Tool Used – Wireshark

6.3 Preparation/Implementation
The following actions must be taken to prepare this setup:
• Before running the Mininet and OpenDaylight virtual machines, Oracle VM VirtualBox is first downloaded and installed.
• Next, a VM is constructed in VirtualBox, the Mininet virtual machine image is downloaded and mounted on top of the fresh virtual machine, and Mininet is installed.
• Third, a second VirtualBox virtual machine is created and the ODL controller is installed on it.
Mininet and its GUI program MiniEdit (running on a VirtualBox VM) are used by the researcher to plan out and simulate the reference topology. X forwarding is required in order to run MiniEdit and connect to Mininet over SSH; for this, PuTTY and Xming are used. OpenDaylight (ODL Beryllium), an external SDN OpenFlow controller, is used to operate the OpenFlow virtual switches and the MEMS switches. Data flows are generated from the hosts and servers, and several performance characteristics, such as link bandwidth and network latency, are measured with IPERF. Wireshark is used to investigate data packets and analyse a variety of protocol messages on several interfaces throughout the network topology.

6.4 Applications
The actual deployment of HFPFMAOS was achieved in two main steps.

Fig 6.7 Deployment Steps of HFPFMAOS
Fig 6.8 OF-switch-based network topology – access layer as TOR switches

Links between hosts and OF switches are 4 Mb/s with a 10 msec delay, while links between all traditional switches and between OF switches are 10 Mb/s with a 5 msec delay, as demonstrated in the following figures. Hosts, nodes, links, and interfaces can be verified with the commands mininet> nodes, mininet> net, and mininet> dump.

Fig 6.9 Creating Links and Switches
Fig 6.10 Network/Nodes Verification
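As a hedged usage note (the script name and controller IP are assumptions), the emulated network can be started either by running the standalone topology script of Section 6.5 with sudo python hfpf_topo.py, or, for a quick check that the remote ODL controller is reachable, with Mininet's built-in minimal topology:

sudo mn --controller=remote,ip=192.168.56.102,port=6633 --switch ovsk,protocols=OpenFlow13 --link tc

The verification commands above (mininet> nodes, mininet> net, mininet> dump) can then be repeated against either topology.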
Fig 6.11 OF Switch Interfaces Verification

The flow entries in the flow tables of the OF switches OSW1-1, OSW1-2, OSW1-3 & OSW1-4 for the proposed HFPFMAOS solution are as under:

For discovery of hosts and the ARP table: mininet> pingall

Fig 6.12 Host Discovery by the pingall command
Fig 6.13 Host pinging

For link discovery – GARP requests:
For link discovery – BDDP requests:
For enabling Layer 2 forwarding:
Generation of traffic by IPERF:

Fig 6.14 Generation of Traffic and Latency (Delay)

STEP 2: Mininet Network Topology with MEMS
In the second phase, another OpenFlow switch, called MEMSSW1, is added to the aggregation layer beside the conventional switches and connected to the OF (TOR) switches in order to test and apply the concept of MEMS switching. The bandwidth and latency of the links to MEMSSW1 are intentionally kept on the higher side, namely 1000 Mb/s and 1 msec, to test the idea of MEMS. The buffer size and throughput (speedup) of these links are set equal to the link bandwidth, i.e. 1000 Mb/s, to emulate all-optical switching. The resulting network topology is displayed below:

Fig 6.15 Network Topology with MEMS
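As a hedged usage example (host names and durations are assumptions; the Mininet CLI substitutes host names with their assigned IP addresses), the IPERF traffic generation and latency measurement referred to above (Fig 6.14) can be reproduced from the Mininet CLI as follows:

mininet> h1 iperf -s &
mininet> h2 iperf -c h1 -t 30 -i 5
mininet> h2 ping -c 10 h1

Alternatively, Mininet's built-in bandwidth test mininet> iperf h1 h2 gives a quick TCP throughput figure between two hosts.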
6.5 Code for Topology Creation
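The topology-creation code appears in the original work as screenshots. As a minimal sketch under the stated link parameters (4 Mb/s / 10 ms to hosts, 10 Mb/s / 5 ms between switches, 1000 Mb/s / 1 ms to MEMSSW1), a Mininet script for one POD connected to a remote ODL controller might look as follows; switch names, DPIDs, the per-TOR host count and the controller IP are assumptions, and POD 2 would be built in the same way:

#!/usr/bin/env python
"""Illustrative sketch of the HFPFMAOS reference topology (one POD) in Mininet.
Names, DPIDs and the controller IP are assumptions; link parameters follow Section 6.4."""

from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.cli import CLI
from mininet.log import setLogLevel


def build():
    net = Mininet(controller=None, switch=OVSSwitch, link=TCLink, autoSetMacs=True)

    # External OpenDaylight controller (IP address is an assumption).
    net.addController('c0', controller=RemoteController,
                      ip='192.168.56.102', port=6633)

    # Access (TOR) OpenFlow switches and conventional aggregation switches of POD 1.
    # Explicit DPIDs avoid collisions that name-derived DPIDs would cause.
    osw1 = net.addSwitch('OSW1', protocols='OpenFlow13', dpid='0000000000000001')
    osw2 = net.addSwitch('OSW2', protocols='OpenFlow13', dpid='0000000000000002')
    agg1 = net.addSwitch('AGG1', dpid='0000000000000003')
    agg2 = net.addSwitch('AGG2', dpid='0000000000000004')

    # MEMS-emulating switch at the aggregation layer (MAOS plane).
    mems = net.addSwitch('MEMSSW1', protocols='OpenFlow13', dpid='0000000000000005')

    # Eight hosts, four behind each TOR switch.
    hosts = [net.addHost('h%d' % i) for i in range(1, 9)]

    # Host <-> TOR links: 4 Mb/s bandwidth, 10 ms delay.
    for i, h in enumerate(hosts):
        net.addLink(h, osw1 if i < 4 else osw2, bw=4, delay='10ms')

    # TOR <-> aggregation links: 10 Mb/s bandwidth, 5 ms delay.
    for tor in (osw1, osw2):
        for agg in (agg1, agg2):
            net.addLink(tor, agg, bw=10, delay='5ms')

    # TOR <-> MEMS links: 1000 Mb/s, 1 ms delay, large queues (all-optical bypass).
    for tor in (osw1, osw2):
        net.addLink(tor, mems, bw=1000, delay='1ms', max_queue_size=1000)

    net.start()
    CLI(net)
    net.stop()


if __name__ == '__main__':
    setLogLevel('info')
    build()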
REFERENCES
[1] https://cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-indexgci/Cloud_Index_White_Paper.pdf
[2] https://www.ciscoknowledgenetwork.com/files/477_11-11-2014-CiscoGCIDraftDeck2013-2018_CKN.pdf
[3] http://www.ciscopress.com/store/data-centre-virtualization-fundamentals-understanding-9781587143243
[4] http://www.graybar.com/applications/data-centres/types
[5] http://www.businessnewsdaily.com/4982-cloud-vs-data-centre.html
[6] http://www.cyberciti.biz/faq/data-centre-standard-overview/
[7] http://www.graybar.com/applications/data-centres/tiers
[8] https://www.microsoft.com/en-us/research/publication/the-cost-of-a-cloud-research-problems-in-data-centre-networks/
[9] Cisco Systems: Data centre: Load balancing data centre services, 2004.
[10] http://research.microsoft.com/en-us/um/people/dmaltz/papers/DC-Costs-CCR-editorial.pdf
[11] M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data centre network architecture," in ACM SIGCOMM, Aug. 2008.
[12] C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, and S. Lu, "BCube: a high performance, server-centric network architecture for modular data centres," in ACM SIGCOMM, Aug. 2009.
[13] C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A scalable and fault-tolerant network structure for data centres," in ACM SIGCOMM, Aug. 2008, pp. 75–86.
[14] http://research.microsoft.com/en-us/um/people/srikanth/data/vl2_sigcomm09.pdf
[15] "3D MEMS Optical switch with toroidal concave mirror," https://www.ntt-review.jp/archive/ntttechnical.php?contents=ntr201211ra1_s.html
[16] J. Yamaguchi, T. Sakata, and N. Shimoyama, "High-yield Fabrication Methods for MEMS Tilt Mirror Array for Optical Switches"; "State of the Art of Optical Switching Technology for All-Optical Networks," http://home.deib.polimi.it/bregni/papers/cscc2001_optswitch.pdf
[17] http://sanmapyourtech.blogspot.com/2014_08_01_archive.html
[18] https://www.sdxcentral.com/wp-content/uploads/2015/11/2015_SDxCentral_-SDN_Controllers-Report_Cisco_FINAL.pdf
[19] http://www.opendaylight.org/project/technical-overview
[20] http://h17007.www1.hpe.com/docs/networking/solutions/sdn/devcentre/03_HP_OpenFlow_Technical_Overview_TSG_v1_2013-10-01.pdf
[21] http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-59/161-sdn.html
[22] https://arxiv.org/ftp/arxiv/papers/1406/1406.0124.pdf
[23] http://upcommons.upc.edu/bitstream/handle/2117/77672/Current+Trends+of+Discovery+Topology+in+SDN.pdf?sequence=1
[24] https://www.researchgate.net/file.PostFileLoader.html?id=554cd4f7d2fd64b73e8b456c&assetKey=AS%3A273773314412545%401442284052403
[25] http://networkengineering.stackexchange.com/questions/7713/how-does-gratuitous-arp-work
[26] http://www.taos.com/2014/07/16/understanding-gratuitous-arps/
[27] https://live.paloaltonetworks.com/t5/Management-Articles/Trigger-a-Gratuitous-ARP-GARP-from-a-Palo-Alto-Networks-Device/ta-p/61962
[28] Poisoning Network Visibility in Software-Defined Networks: New Attacks and Countermeasures, http://www.internetsociety.org/sites/default/files/10_4_2.pdf
[29] http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/8141-calculate-bandwidth-snmp.html
[30] http://internetofthingsagenda.techtarget.com/definition/micro-electromechanical-systems-MEMS
[31] https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.1.pdf
[32] https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.5.1.pdf
[33] https://www.mininet.org