OTV(OVERLAY TRANSPORT VIRTUALIZATION)
DATA CENTER | WWW.NETPROTOCOLXPERT.IN
• OTV (Overlay Transport Virtualization) is a technology that provides layer2
extension capabilities between different data centers.
• In its simplest form, OTV is a DCI (Data Center Interconnect) technology
that routes MAC-based information by encapsulating traffic in normal IP
packets for transit.
OTV OVERVIEW
• Traditional L2VPN technologies, like EoMPLS and VPLS, rely heavily on
tunnels. Rather than creating stateful tunnels, OTV encapsulates layer2
traffic with an IP header and does not create any fixed tunnels.
• OTV only requires IP connectivity between remote data center sites, which
allows the transport infrastructure to be layer2 based, layer3 based, or
even label switched. IP connectivity is the base requirement, along with some
additional connectivity requirements.
• OTV requires no changes to existing data centers to work, but it is
currently only supported on the Nexus 7000 series switches with M1-Series
linecards.
• A big enhancement OTV brings to the DCI realm is its control plane functionality of
advertising MAC reachability information, instead of relying on traditional data plane
learning and MAC flooding. OTV refers to this concept as MAC routing, aka MAC-in-IP
routing: an Ethernet frame is encapsulated in an IP packet before it is forwarded
across the transport IP network. Encapsulating the traffic between the OTV devices
creates what is called an overlay between the data center sites.
Think of an overlay as a logical multipoint bridged network between the sites.
• OTV is deployed on devices at the edge of the data center sites, called OTV Edge Devices.
These Edge Devices perform typical layer-2 learning and forwarding functions on their site
facing interfaces (the Internal Interfaces) and perform IP-based virtualization functions on
their core facing interface (the Join Interface) for traffic that is destined via the logical
bridge interface between DC sites (the Overlay Interface).
• Each Edge Device must have an IP address which is significant in the core/provider network
for reachability, but is not required to have any IGP relationship with the core. This allows
OTV to be inserted into any type of network in a much simpler fashion.
OTV TERMINOLOGY
OTV Edge Device
• Is a device (Nexus 7000 or Nexus 7000 VDC) that sits at the edge of a data center,
performing all the OTV functions, for the purpose of connecting to other data centers.
• The OTV edge device is connected to the layer2 DC domain as well as the IP transport
network.
• With NX-OS 5.1 a maximum of two OTV edge devices can be deployed on a site to allow
for redundancy.
Internal Interfaces
• Are the layer2 interfaces on the OTV Edge Device configured as a trunk or an access
port.
• Internal interfaces take part in the STP domain and learn MAC addresses as
normal.
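As a minimal sketch, an internal interface is plain layer2 configuration; the interface number and VLAN range below are illustrative assumptions:

```
interface Ethernet2/10
  description OTV internal interface (site-facing)
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100-150
  no shutdown
```

Nothing OTV-specific is configured here; the Edge Device simply bridges and learns on it like any other switchport.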
Join Interface
• Is a layer3 interface on the OTV Edge Device that connects to the IP transport network.
• This interface is used as the source for OTV encapsulated traffic that is sent to remote OTV
Edge Devices.
• With NX-OS 5.1 this must be a physical interface or layer3 port channel. Loopback interfaces
are not supported in the current implementations.
• A single Join interface can be defined and associated with a given OTV overlay.
• Multiple overlays can also share the same Join interface.
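A hedged example of a Join interface on NX-OS (the address and interface number are placeholders); IGMPv3 is enabled on it so the Edge Device can join the multicast groups as a host:

```
interface Ethernet1/1
  description OTV join interface (core-facing)
  ip address 10.1.1.1/30
  ip igmp version 3
  no shutdown
```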
Overlay Interface
• Is a logical multi-access, multicast-capable interface where all the OTV configuration is
explicitly defined by the user.
• The overlay interface acts as a logical bridge interface between DC sites, defining which layer2
frames should be dynamically encapsulated by OTV before being forwarded out the Join interface.
OTV Control-Group
• Is the multicast group used by the OTV speakers in an overlay network.
• A unique multicast address is required for each overlay group.
OTV Data-Group
• Is used to encapsulate any layer2 multicast traffic that is extended across the overlay.
Extended-VLANs
• Are the VLANs that are explicitly allowed to be extended across the overlay between sites.
• If not explicitly allowed, the MAC addresses from a VLAN will not be advertised across the overlay.
Site-VLAN
• Is the VLAN used for communication between local OTV edge devices within a site.
• Is used to facilitate the role election of the AED (Authoritative Edge Device).
• The Site-VLAN must exist and be active (defined, or using the default configuration).
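The terminology above maps directly onto a short configuration block. The sketch below ties the pieces together; all group addresses, VLAN ranges and interface names are illustrative assumptions, not a definitive design:

```
feature otv

otv site-vlan 99                    ! Site-VLAN, used for local AED election

interface Overlay1                  ! Overlay Interface
  otv join-interface Ethernet1/1    ! Join Interface (layer3, core-facing)
  otv control-group 239.1.1.1       ! OTV Control-Group (ASM)
  otv data-group 232.1.1.0/28       ! OTV Data-Group range for L2 multicast
  otv extend-vlan 100-150           ! Extended-VLANs carried across the overlay
  no shutdown
```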
OTV OPERATION
• As already mentioned, OTV relies on the control plane to advertise MAC reachability information. The
underlying routing protocol used in the control plane is IS-IS (Intermediate System to Intermediate
System). IS-IS hellos and LSPs are encapsulated in the OTV IP multicast header. The OTV IS-IS
packets use a distinct layer2 multicast destination address, so OTV IS-IS packets do not
conflict with IS-IS packets used for other technologies.
• The choice of IS-IS comes down to two reasons.
1. Firstly, IS-IS does not use IP to carry routing information messages; it uses CLNS. Thus IS-IS is
neutral regarding the type of network addresses for which it can route traffic, making it ideal to
route MAC reachability information.
2. Secondly, its use of TLVs. A TLV (Type, Length, Value) is an encoding format used to add
optional information to data communication protocols like IS-IS. This is how IS-IS could easily be
extended to carry the new information fields.
• Do you need to understand IS-IS to understand OTV? Do you need to know how the engine of a car
works in order to drive it? No, but it's best to have at least a basic understanding of how it all fits together.
• Before any MAC reachability information can be exchanged, all OTV Edge Devices
must become “adjacent”.
• This is possible by using the specified OTV Control-Group across the transport
infrastructure to exchange the control protocol messages and advertise the MAC
reachability information.
• Additional documentation indicates that unicast transport support will be possible in
future Cisco software releases (post NX-OS 5.1), by using a concept known as an
“Adjacency Server”.
• For now, let's focus on using multicast in the transport infrastructure. All OTV Edge
Devices should be configured to join a specific ASM (Any Source Multicast) group,
where they simultaneously play the role of receiver and source. This is multicast
host functionality.
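Assuming a multicast-enabled transport, the core must therefore provide an ASM group for the control plane. A hedged sketch of the transport side on NX-OS (RP address and group range are placeholders):

```
feature pim
! ASM rendezvous point covering the OTV control-group
ip pim rp-address 10.0.0.100 group-list 239.1.1.0/24

interface Ethernet1/1
  ip pim sparse-mode
```

The Edge Devices themselves stay out of PIM; they only signal group membership with IGMP, as the next section shows.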
CONTROL PLANE NEIGHBOUR DISCOVERY
1. Each OTV Edge Device sends an IGMP report to join the specific ASM group used to
carry control protocol exchanges. The Edge Devices join the group as hosts. This is IGMP,
not PIM.
2. OTV Hello packets are generated to all other OTV Edge Devices, to communicate the
local Edge Device's existence and to trigger the establishment of control plane
adjacencies.
3. The OTV Hello packets are sent across the logical overlay to the remote device. This
action implies that the original frames are OTV encapsulated by adding an external IP
header. The source IP address is that of the Join interface, and the destination is the
ASM multicast group as specified for the control traffic.
4. With a multicast enabled transport network, the multicast frames are replicated for each
OTV device that joined the multicast control group.
5. The receiving OTV Edge Devices strip off the encapsulating IP header before the
packet is passed to the control plane for processing.
6. Once the OTV Edge Devices have discovered each other, they are ready to exchange MAC
address reachability information, which follows a very similar process.
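The discovery steps above can be verified from any Edge Device; treat the commands below as a starting point, since output fields vary by release:

```
show otv overlay 1     ! overlay state, join interface, control/data groups
show otv adjacency     ! remote Edge Devices discovered via the control-group
```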
CONTROL PLANE MAC ADDRESS ADVERTISEMENT
1. The OTV Edge Device in the West data center site learns new MAC addresses (MAC A, B
and C on VLAN 100) on its internal interface. This is done via traditional data plane
learning.
2. An OTV Update message is created containing information for MAC A, MAC B and MAC C.
The message is OTV encapsulated and sent into the Layer 3 transport. Same as before, the
source IP address is that of the Join interface, and the destination is the ASM multicast
group.
3. With a multicast enabled transport network, the multicast frames are replicated for each
OTV device that joined the multicast control group. The OTV packets are decapsulated and
handed to the OTV control plane.
4. The MAC reachability information is imported into the MAC address tables (CAMs) of the
Edge Devices. The interface information to reach MAC-A, MAC-B and MAC-C is the IP
address of the Join Interface on the West OTV Edge Device.
• Once the control plane adjacencies between the OTV Edge Devices are established and MAC
address reachability information has been exchanged, traffic between the sites is possible.
• It is important to note that traffic within a site will not traverse the overlay, and why should
it? The OTV edge will have the destination MAC pointing towards a local interface.
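Remote MACs learned through the control plane show up pointing at the overlay rather than a physical port. As a rough sketch (VLAN number assumed), this can be confirmed with:

```
show otv route                   ! MAC routes learned via IS-IS across the overlay
show mac address-table vlan 100  ! remote MACs list Overlay1 as their interface
```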
DATA PLANE TRAFFIC FORWARDING
1. A layer2 frame is destined to a MAC address learned via the overlay, since the next-hop
interface is across the overlay.
2. The original layer2 frame is OTV encapsulated and the DF bit is set. The overlay
encapsulation format is the layer2 frame encapsulated in UDP/GRE, with
destination port 8472. The source IP address is that of the Join interface. The
destination IP is not the multicast group, but the IP address of the Join Interface of the
remote OTV edge that advertised MAC-3.
3. The unicast frame is carried across the transport infrastructure directly to the remote OTV
Edge Device. If it were a broadcast frame it would reach all remote OTV Edge Devices. If it
were multicast it would only be forwarded to the remote OTV Edge Devices that have
subscribing members, using an OTV data-group address.
4. The remote OTV Edge Device decapsulates the frame exposing the original Layer 2
packet.
5. The remote OTV Edge Device performs a layer2 lookup on the original Ethernet frame and determines
the exit interface towards the destination.
6. The frame reaches its destination.
OTV HEADER FORMAT
• A 42-byte OTV header is added and the DF (Don't Fragment) bit is set on ALL OTV packets.
The DF bit is set because the Nexus 7000 does not support fragmentation and reassembly.
The source VLAN ID and the Overlay ID are set, and the 802.1p priority bits from the original
layer2 frame are copied to the OTV header, before the OTV packet is IP encapsulated.
Increasing the MTU size of all transport interfaces is required for OTV. This challenge is
no different from other DCI technologies like VPLS and EoMPLS.
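Because the 42-byte header cannot be fragmented, the transport interfaces need an MTU of at least the site MTU plus 42 bytes (e.g. 1500 + 42 = 1542); in practice jumbo frames are typically enabled end to end. A minimal sketch:

```
interface Ethernet1/1
  description OTV join interface (core-facing)
  mtu 9216    ! jumbo MTU covers 1500-byte frames plus the 42-byte OTV overhead
```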
