multiple sessions may reduce the time required to fail over.
However, as shown in the experimental results in Section 5,
current VPN switching is fast enough to hide VPN failures
from OSs and maintain TCP connections.
4.2. Packet Relay System
Our packet relay system implementation performs two
operations: IP encapsulation and NAT. IP encapsulation allows
servers to identify clients and hides IP address changes from
the guest OS. NAT hides IP address changes from servers.
Figure 6 shows these operations when an IP packet is sent
from a client to a server.
The packet relay system uses a simple outer IP header, as in
IP over IP (IPIP) [5], to encapsulate IP packets. When a packet
is sent from a client to a server, the encapsulating IP header
carries the relay client IP as the source and the relay server
IP as the destination (see Figure 6). The relay client IP is
assigned by the VPN gateways and differs from the IP address
assigned to the guest OS (the guest IP). We currently assume
that the guest IP is unique within a cloud and can be used as
a client ID. When there are many clients, we can append a
larger unique identifier (e.g., 128 bits) after the
encapsulating IP header.
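As a concrete illustration, the following minimal sketch shows how such
an encapsulating header could be prepended to a guest packet. The struct
layout, field choices such as TTL, and the helper name are assumptions
made for illustration, not code taken from our BitVisor module.

    /* Minimal sketch (illustrative only): prepend an IPIP-style outer
     * header to a guest packet before it enters the VPN. */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/ip.h>      /* struct iphdr, IPPROTO_IPIP */

    static size_t encapsulate(uint8_t *out, const uint8_t *inner, size_t inner_len,
                              uint32_t relay_client_ip, uint32_t relay_server_ip)
    {
        struct iphdr outer;

        memset(&outer, 0, sizeof(outer));
        outer.version  = 4;
        outer.ihl      = 5;                    /* 20-byte header, no options */
        outer.tot_len  = htons(sizeof(outer) + inner_len);
        outer.ttl      = 64;
        outer.protocol = IPPROTO_IPIP;         /* protocol 4: IP-in-IP */
        outer.saddr    = relay_client_ip;      /* assigned by the VPN gateway */
        outer.daddr    = relay_server_ip;
        /* outer.check is filled in by the usual IP header checksum routine */

        memcpy(out, &outer, sizeof(outer));
        memcpy(out + sizeof(outer), inner, inner_len);
        return sizeof(outer) + inner_len;
    }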
The relay server manages the mapping between the guest and
gateway IPs. In Figure 6, it records the relationship between
“Guest IP” and “VPN GW IP”, allowing the relay server to
determine the appropriate gateway for each client. The relay
server updates this mapping table every time it receives an
IP packet from a client.
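A minimal sketch of such a table, keyed by the guest IP, is shown below;
the fixed capacity, linear search, and function names are simplifications
assumed for illustration.

    /* Illustrative mapping table: guest IP -> VPN gateway IP, refreshed
     * on every packet received from a client. */
    #include <stdint.h>

    #define MAX_CLIENTS 1024

    struct relay_entry {
        uint32_t guest_ip;   /* client ID: IP assigned to the guest OS  */
        uint32_t gw_ip;      /* VPN gateway the client currently uses   */
    };

    static struct relay_entry table[MAX_CLIENTS];
    static int table_len;

    static void update_mapping(uint32_t guest_ip, uint32_t gw_ip)
    {
        for (int i = 0; i < table_len; i++) {
            if (table[i].guest_ip == guest_ip) {
                table[i].gw_ip = gw_ip;        /* the client may have switched gateways */
                return;
            }
        }
        if (table_len < MAX_CLIENTS) {
            table[table_len].guest_ip = guest_ip;
            table[table_len].gw_ip = gw_ip;
            table_len++;
        }
    }

    static uint32_t lookup_gateway(uint32_t guest_ip)  /* returns 0 if unknown */
    {
        for (int i = 0; i < table_len; i++)
            if (table[i].guest_ip == guest_ip)
                return table[i].gw_ip;
        return 0;
    }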
The relay server also translates the source address of the
original IP header to its own IP address. This lets the cloud
server see the same source IP address even when the relay
client IP changes. The relay server also recalculates the IP
header checksum. When the server sends an IP packet back to
the client, the packet follows the reverse path.
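The sketch below illustrates this source-address rewrite together with the
IP header checksum recalculation; in practice the TCP/UDP checksum, which
covers the source address via the pseudo-header, would also have to be
adjusted. The function names are again illustrative.

    /* Illustrative source NAT on the inner IP header.  ip_checksum is the
     * standard one's-complement header checksum (RFC 1071); a complete
     * implementation would also fix up the TCP/UDP checksum. */
    #include <stdint.h>
    #include <netinet/ip.h>

    static uint16_t ip_checksum(const struct iphdr *ip)
    {
        const uint16_t *p = (const uint16_t *)ip;
        uint32_t sum = 0;

        for (int i = 0; i < ip->ihl * 2; i++)  /* ihl counts 32-bit words */
            sum += p[i];
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

    static void nat_source(struct iphdr *inner, uint32_t relay_server_ip)
    {
        inner->saddr = relay_server_ip;        /* server now sees the relay server's IP */
        inner->check = 0;                      /* checksum field must be zero while summing */
        inner->check = ip_checksum(inner);
    }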
The packet relay client is implemented as a module run-
ning in BitVisor, and the packet relay server is implemented
as a user-level process running on the server.
5. Experiments
This section presents the experimental evaluation of our
scheme. We first show the measured failover time, and then the
performance overhead of our virtualization layer.
5.1. Setup
We conducted the experiments in a wide-area distributed
Internet environment in Japan. We placed a client at
Tsukuba, and connected it to VPN gateways in data centers
in Tokyo, Yokohama, and Fukuoka. The straight-line dis-
tances from Tsukuba are approximately 56 km to Tokyo,
84 km to Yokohama, and 926 km to Fukuoka.

[Figure 7. Transition of Latency to Data Centers: latency (msec) versus elapsed time (sec) for Tokyo, Yokohama, and Fukuoka.]

[Figure 8. Throughput Transition over Failure: VPN throughput (Mbit/sec) versus elapsed time (sec); the failure occurred at 15.1 s and was recovered at 19.2 s.]

These data cen-
ters are connected via leased lines with a maximum speed
of 100 Mbps. The leased lines are actually implemented
with yet another VPN provided by an ISP.
We used a PC equipped with Intel Core 2 Duo E8600
(3.33 GHz), PC2-6400 4 GB memory, and Intel X25-V
SSD 40 GB as the client machine at Tsukuba. The base
VMM is BitVisor 1.1, available from the SourceForge site, and
the guest OS is Windows XP. The server machine is an HP
ProLiant BL280c blade server equipped with a Xeon E5502
(1.86 GHz), 2 GB memory, and a 120 GB 5400 rpm 2.5-inch
HDD. We used Kernel-based Virtual Machine (KVM) with
CentOS 5.4 as both the guest and host OS. The cloud server and
the packet relay server each ran on a virtual machine.
5.2. Failover Time
The VPN failover time consists of the time to (1) detect a
VPN failure, (2) switch VPN gateways, and (3) restart TCP
transmission. In our scheme, the time to detect a VPN failure
is expected to be (n + 1) × RTO, where RTO is calculated by
Jacobson's algorithm and n is the retry number. To verify the
estimated RTO, we first measured the transition of the network
latency to each data center. Figure 7 shows the results. The
latency to both Tokyo and Yokohama was around 15 msec, and to
Fukuoka around 35 msec. Given these latencies, the estimated
RTO for Tokyo was about 1 s.
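For concreteness, the following sketch reproduces this estimate under the
assumption that the RTO follows the standard RFC 6298 formulation of
Jacobson's algorithm, including its 1-second lower bound; the constants,
initial values, and the retry number n are illustrative, not measured
parameters of our implementation.

    /* Illustrative RTO estimate following RFC 6298 (Jacobson's algorithm):
     *   RTTVAR = 3/4 * RTTVAR + 1/4 * |SRTT - RTT|
     *   SRTT   = 7/8 * SRTT   + 1/8 * RTT
     *   RTO    = max(1 s, SRTT + 4 * RTTVAR)
     * With a stable ~15 msec RTT to Tokyo, SRTT + 4*RTTVAR stays well below
     * the 1-second lower bound, so the estimated RTO is clamped to about
     * 1 s and the expected detection time is (n + 1) * RTO. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double rtt = 0.015;                    /* ~15 msec latency to Tokyo */
        double srtt = rtt, rttvar = rtt / 2;   /* RFC 6298 initialization   */

        for (int i = 0; i < 100; i++) {        /* feed repeated samples */
            rttvar = 0.75 * rttvar + 0.25 * fabs(srtt - rtt);
            srtt   = 0.875 * srtt + 0.125 * rtt;
        }

        double rto = srtt + 4.0 * rttvar;
        if (rto < 1.0)
            rto = 1.0;                         /* 1-second minimum RTO */

        int n = 3;                             /* illustrative retry number */
        printf("RTO = %.3f s, expected detection time = %.3f s\n",
               rto, (n + 1) * rto);
        return 0;
    }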
We then measured the failover time. We intentionally
caused a VPN failure and measured the transition of TCP