IBM FlashSystem 810 and IBM SVC
Previously, TMS RamSan products were not officially supplied to Ukraine. Once this division was sold to IBM, the situation changed, and on one occasion I got hold of a TMS RamSan 810, already in the form of an IBM FlashSystem 810. What came of it, you can read below.
The tested IBM FlashSystem 810 configuration:
- 13x eMLC FlashCards, 480GB usable capacity each: 12 for data, 1 as an active spare (the test configuration differs from the maximum)
- Host interfaces: 4x 8Gb FC
- Full set of licenses
It looks like this:

At the time of writing, there are no SPC-1 results for the IBM FlashSystem 810. There is an official SPC-1 result for the similar (but fully fault-tolerant) IBM FlashSystem 820: ~195K IOPS with an average response time of 1.29 ms.
We run tests to determine the achievable performance of the IBM FlashSystem 810 in this configuration, and then repeat them with the IBM FlashSystem 810 virtualized by IBM SVC, to determine whether the SVC becomes a bottleneck in such a setup.
In the test environment, part of the IBM FlashSystem 810 was presented directly (over the FC SAN) to the host, and the other part was presented to the host through a separate SVC pool (a single I/O group: two CG8 nodes). The host was a dLPAR on an IBM Power Systems 9117-MMB running AIX 6.1 TL08 SP3 (this release includes native support for the IBM FlashSystem 810, so no extra ODM package is required). Both the SVC and the FlashSystem 810 presented 16 LUs to the host, which were striped together with AIX LVM. The data was organized in JFS2 file systems with an inline log and the mount options noatime and cio. Each file system (about 2TB in size) was filled with data to more than 80%. On the IBM SVC, all 16 LUs were thin-provisioned (the IBM FlashSystem 810 has no comparable technology). A minimal sketch of how such a layout can be built follows below.
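A minimal sketch, assuming SVC 6.x-era CLI syntax; the pool, volume and hdisk names, the sizes, and the 64KB strip size are all hypothetical, chosen only to illustrate the layout described above.

```sh
# --- SVC side: one of the 16 thin-provisioned volumes (names/sizes hypothetical) ---
svctask mkvdisk -mdiskgrp fs810_pool -iogrp 0 -size 256 -unit gb \
        -rsize 2% -autoexpand -grainsize 256 -name fs810_thin01

# --- AIX side: stripe the 16 LUs with LVM, then JFS2 with an inline log ---
mkvg -S -y fs810vg hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11 \
        hdisk12 hdisk13 hdisk14 hdisk15 hdisk16 hdisk17 hdisk18 hdisk19
mklv -y fs810lv -t jfs2 -S 64K fs810vg 2000 hdisk4 hdisk5 hdisk6 hdisk7 \
        hdisk8 hdisk9 hdisk10 hdisk11 hdisk12 hdisk13 hdisk14 hdisk15 \
        hdisk16 hdisk17 hdisk18 hdisk19        # striped LV over all 16 LUs
crfs -v jfs2 -d fs810lv -m /fs810 -A yes -a logname=INLINE
mount -o noatime,cio /fs810   # cio = concurrent I/O, bypasses FS caching
```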

The first test was carried out with a multi-threaded workload: 100% random, Read/Write = 80/20, varying the block size (diagrams 1, 2), with the IBM FlashSystem 810 connected to the host over the SAN (without IBM SVC). A hypothetical reproduction of one test point is sketched below. Two observations:
• With 64KB/128KB blocks the maximum bandwidth of the 4x 8Gbit FC interfaces was reached (a sanity check of this follows the tables below).
• The relatively low performance with 512B and 2KB blocks is due to the default block size used by the JFS2 file system (4KB).
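The article does not say which load generator was used. Purely as an illustration, here is one test point (8KB block, 80/20 random read/write, multiple threads) expressed as an fio run; the file path, size, thread count and runtime are assumptions.

```sh
# Hypothetical fio equivalent of one test point (not the author's actual tool)
fio --name=rand8k_rw80 --filename=/fs810/testfile --size=100g \
    --rw=randrw --rwmixread=80 --bs=8k --direct=1 --ioengine=psync \
    --numjobs=24 --group_reporting --time_based --runtime=300
```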

diagram 1 - IBM FlashSystem 810, 100% Random, Read/Write = 80/20 (IOPS)

Block size            512B    2KB     4KB     8KB    16KB   32KB   64KB  128KB
FlashSystem 810      54303  54347  225278  191741  167754  89818  51661  25010

diagram 2 - IBM FlashSystem 810, average latency (ms)

Block size               512B   2KB   4KB   8KB  16KB  32KB  64KB  128KB
FlashSystem 810, write    0.1   0.1   0.1   0.1   0.1   0.2   0.3    0.4
FlashSystem 810, read     0.2   0.2   0.3   0.5   0.8   1.5   3.5    8.2
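A quick sanity check of the bandwidth ceiling (approximate figures, assuming ~800 MB/s of usable throughput per 8Gb FC port after 8b/10b encoding): 25,010 IOPS x 128KB ≈ 3.3 GB/s and 51,661 IOPS x 64KB ≈ 3.4 GB/s, both right at the ~3.2-3.3 GB/s aggregate limit of the four FC ports. At 4KB, by contrast, 225,278 IOPS x 4KB ≈ 0.9 GB/s, so the interfaces are far from saturated and the array itself is the limit.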
We repeat the test with the same multi-threaded workload (100% random, Read/Write = 80/20, varying the block size), but this time with the IBM FlashSystem 810 presented to the host through the IBM SVC (diagrams 3, 4). As the charts show, the numbers are lower than in the previous test. The reason was more than obvious: maximum CPU utilization was recorded on both SVC nodes during the test. Along with the drop in IOPS, the response time increased.

diagram 3 - IBM FlashSystem 810 over IBM SVC, 100% Random, Read/Write = 80/20 (IOPS)

Block size                512B    2KB     4KB     8KB    16KB   32KB   64KB  128KB
SVC+FlashSystem 810      44444  44423  153889  135541  111212  68236  36558  18193

diagram 4 - IBM FlashSystem 810 over IBM SVC, average latency (ms)

Block size                   512B   2KB   4KB   8KB  16KB  32KB  64KB  128KB
SVC+FlashSystem 810, write    0.1   0.1   0.5   0.5   0.5   0.4   0.5    0.6
SVC+FlashSystem 810, read     0.3   0.3   1.1   1.2   1.8   4.0  10.0   20.0
We repeat the test once more, leaving the workload unchanged (100% random, Read/Write = 80/20), but now with caching on the SVC turned off for the test volumes (Cache Mode Disabled); the command is sketched below.
As the charts below show (diagrams 5, 6), this improved the IOPS somewhat, but worsened the response time on large blocks. Once again, the limiting factor was the CPU load on the SVC nodes.
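A hedged sketch of this setting, assuming the SVC 6.x CLI and the hypothetical volume name from the earlier sketch; it has to be repeated for each of the 16 volumes.

```sh
# Disable SVC caching for a test volume (volume name is hypothetical).
# Note: later SVC code levels use -cache readwrite|readonly|none.
svctask chvdisk -cache none fs810_thin01
```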

diagram 5 - IBM FlashSystem 810 over IBM SVC (cache disabled), 100% Random, Read/Write = 80/20 (IOPS)

Block size                               512B    2KB     4KB     8KB    16KB   32KB   64KB  128KB
SVC (Cache disabled)+FlashSystem 810    45552  46684  191483  169890  133663  79913  40926  18821

diagram 6 - IBM FlashSystem 810 over IBM SVC (cache disabled), average latency (ms)

Block size                                      512B   2KB   4KB   8KB  16KB  32KB  64KB  128KB
SVC (Cache disabled)+FlashSystem 810, write      0.1   0.1   0.2   0.2   0.2   0.5   1.2    2.2
SVC (Cache disabled)+FlashSystem 810, read       0.3   0.3   0.6   1.0   1.5   3.1   9.0   25.0
Now let us compare the data from the previous tests (diagrams 7, 8).

diagram 7 - IOPS by block size (512B-128KB): FlashSystem 810 vs SVC+FlashSystem 810 vs SVC (Cache disabled)+FlashSystem 810 (combined view of the data from diagrams 1, 3 and 5)

diagram 8 - average R/W latency, ms, by block size: read and write series for FlashSystem 810, SVC+FlashSystem 810 and SVC (Cache disabled)+FlashSystem 810 (combined view of the data from diagrams 2, 4 and 6)
The dependence of IOPS on the SVC CPU load observed in the previous tests is reflected in diagram 9 below.

diagram 9 - 100% Random, Read/Write = 80/20, block size = 8KB, SVC (Cache disabled)+FlashSystem 810
(K IOPS and SVC CPU load, %, plotted against the number of concurrent streams)

Concurrent streams     8   10   12   14   16   18   20   22   24   26   28
SVC CPU load, %       19   37   48   59   75   76   80   80   80   80   80
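For reference, a sketch of how the per-node CPU load can be sampled during a run; it assumes a code level that provides lsnodestats, and the node names and the cpu_pc statistic name are assumptions to be checked against the actual code level.

```sh
# Sample current CPU utilization on both SVC nodes during the benchmark
# (node names and the cpu_pc stat name are assumptions for this sketch)
ssh admin@svc_cluster 'svcinfo lsnodestats node1; svcinfo lsnodestats node2' \
    | grep cpu_pc
```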
Results:
As demonstrated above, a single IBM FlashSystem 810, not even in its full configuration, outperforms a single IBM SVC cluster. (IBM SVC + IBM FlashSystem is still faster than, for example, an IBM Storwize V7000 with SSDs.)
A number of upgrades are available for the IBM SVC, in particular the addition of a second CPU and of another 4-port FC HBA. Some IBM Redbooks covering the combined use of IBM SVC and IBM FlashSystem recommend upgrading the SVC nodes with a second FC HBA, but there is no recommendation to add a second processor. Why? After all, it was demonstrated above that in this configuration the bottleneck is the CPU on the SVC nodes. The reason is quite simple: the current SVC code/firmware can use the extra CPU only (and exclusively) for Real-time Compression, because no more than 4 CPU cores are supported for general processing.

Questions?

Oleg Korol
it-expert@ukr.net
http://ua.linkedin.com/pub/oleg-korol/26/920/716
