Scalable Storage Configuration for the
Physics Database Services
Luca Canali, CERN IT
LCG Database Deployment and
Persistency Workshop
October, 2005
Outline
• In this talk I will discuss
– Storage configuration for scalable database services
• Main challenges
• Best practices
– An implementation of scalable storage
• Impacts on DB logical to physical mapping
• Performance and resource allocations
– How we can help you to size new database projects or
to scale up existing applications
• Performance testing
• Benchmark data
Oracle Scalable Architecture
Goal: A database infrastructure that provides the required system
resources to the end-users and applications.
How: A modular architecture that can scale up to a large number of
components
RAID 1: HA Storage
• Mirroring
– 2-way mirroring (RAID 1) protects against a single point of failure
– Can be used to redistribute I/O load (performance)
RAID 0: Scalable Performance
• RAID 0 (striping) automatically distributes file blocks across multiple disks.
• Performance and scalability are increased
• Error resiliency is decreased
[Diagram: unstriped disks vs. striped disks]
Mechanical and Geometrical Constraints
• The outer part of the disk provides
– More throughput
– Lower latency
(outer tracks hold more sectors, so more data passes under the head per revolution; confining data to the outer half also shortens seek distances)
S.A.M.E. Strategy
• Goal: optimize storage I/O utilization
• S.A.M.E. (Stripe And Mirror Everything) Strategy
– Built on the concepts of RAID 1 + 0
– Proposed by J. Loaiza (Oracle) in 1999
– Replaces “old recipes”: manual balancing across
volumes
• Requires a software or hardware volume manager
– ASM is Oracle’s solution with 10g “S.A.M.E. out of the
box”
– Other solutions available from different vendors
require configuration
Storage Configuration Guidelines
• Use all available disk drives
• Place frequently used data on the outer half of the disk
– Fastest transfer rate
– Minimize seek time
• Stripe data at 1MB extents
– Distribute the workload across disks
– Eliminate hot spots
– Optimum sequential bandwidth is gained with 1MB I/Os
• Stripe redo logs across multiple drives
– Maximize write throughput for small writes
– Smaller stripe size (128KB) and/or dedicated disks
• Use cache on the controller
– ‘Write-back’ cache
– Battery-backed cache
Oracle’s ASM Main Features
• Mirror protection:
– 2-way and 3-way mirroring available.
– Mirror on a per-file basis
– Can mirror across storage arrays
• Data striping across the volume:
– 1MB and 128KB stripes available (see the sketch after this list)
• Supports clustering and single instance
• Dynamic data distribution
– A solution to avoid ‘hot spots’
– On-line add/drop disk with minimal data relocation
– Automatic database file management
• Database File System with performance of RAW I/O
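For illustration, a minimal sketch of how the two stripe sizes are selected (hedged; the disk group name DATA_DG is hypothetical, while onlinelog and datafile are ASM's built-in template names):

  -- Hedged sketch: DATA_DG is a hypothetical disk group name.
  -- These ALTERs restate the 10g defaults: redo logs get the FINE
  -- (128KB) stripe, datafiles the COARSE (1MB) stripe.
  ALTER DISKGROUP DATA_DG ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);
  ALTER DISKGROUP DATA_DG ALTER TEMPLATE datafile ATTRIBUTES (COARSE);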
ASM’s Configuration – Examples
• ASM is a volume manager; its output is disk groups (DG) that Oracle databases can mount to allocate their files
[Diagram: two example disk-group layouts for a DATA-DG and a RECOVERY-DG]
– Config 1: disk groups created with dedicated disks
– Config 2: disk groups created by ‘horizontal’ slicing of every disk
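A minimal SQL sketch of creating such disk groups (hedged; disk group names and device paths are hypothetical):

  -- Hedged sketch: names and device paths are hypothetical.
  -- NORMAL REDUNDANCY gives 2-way mirroring; putting each failure
  -- group on a different storage array mirrors across arrays.
  CREATE DISKGROUP DATA_DG NORMAL REDUNDANCY
    FAILGROUP array1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
    FAILGROUP array2 DISK '/dev/raw/raw3', '/dev/raw/raw4';

  CREATE DISKGROUP RECOVERY_DG NORMAL REDUNDANCY
    FAILGROUP array1 DISK '/dev/raw/raw5', '/dev/raw/raw6'
    FAILGROUP array2 DISK '/dev/raw/raw7', '/dev/raw/raw8';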
Proposed Storage Configuration
• Proposed storage configuration:
– High availability
– High performance
– Allows backups to disk
– Allows clusterware mirroring (10.2)
– DBs have dedicated resources
[Diagram: two Oracle RAC databases (DB N.1, DB N.2) mounting ASM disk groups on shared storage arrays; Data DG-1 is paired with Recovery DG-2 and Data DG-2 with Recovery DG-1, so each database's data and its recovery area sit on different arrays]
FAQ 1: Datafiles
• Do I need to worry about the number and names of the datafiles allocated for each tablespace?
• “Traditional” storage allocation across multiple volumes:
– Requires a careful allocation of multiple datafiles across logical
volumes and/or filesystems
– Datafile-to-filesystem and filesystem-to-physical storage mappings
have to be frequently tuned
• S.A.M.E. storage, such as Oracle ASM, provides balanced I/O access
across disks
– There is NO NEED, for performance reasons, to allocate multiple
datafiles per tablespace.
– 10g new feature “bigfile tablespace” allows for tablespaces with a
single datafile that can grow up to 32 TB (db_block_size=8k)
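A hedged SQL sketch of such a tablespace (tablespace and disk group names are hypothetical):

  -- Hedged sketch: names are hypothetical. One datafile per
  -- tablespace; ASM stripes it across all disks of DATA_DG.
  CREATE BIGFILE TABLESPACE physics_data
    DATAFILE '+DATA_DG' SIZE 100G
    AUTOEXTEND ON NEXT 10G MAXSIZE UNLIMITED;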
FAQ 2: Data and Index Tablespaces
• Do I need dedicated tablespaces for indexes and
tables?
• Separation of indexes and tables has often been advised to:
– Distribute I/O
– Reduce fragmentation
– Allow separate backup of tables and indexes
• S.A.M.E. storage, such as Oracle ASM, provides balanced I/O access
across disks
– No performance gains are expected by using dedicated
tablespaces for INDEXes and TABLEs.
• Additional Notes:
– Tablespace fragmentation has little impact when using locally managed tablespaces and automatic segment space management (9i and 10g; see the sketch after this list)
– Very large databases can profit from using multiple tablespaces for admin purposes and for logical separation of objects
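For reference, a hedged sketch of a tablespace with the settings mentioned in the notes above (names are hypothetical):

  -- Hedged sketch: names are hypothetical.
  CREATE TABLESPACE app_data
    DATAFILE '+DATA_DG' SIZE 10G
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    SEGMENT SPACE MANAGEMENT AUTO;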
FAQ 3: Sizing Storage
• Storage sizing for databases should not take capacity as the first requirement
– Bandwidth and performance metrics are bound to the number of disk spindles (see the example after this list)
– Magnetic HD technology has improved the GByte/$ ratio
– The rest of HD technology has not seen much improvement in the last 5 years (since 15K rpm HDs)
• Sizing for storage requirements should
– Be based on stress test measurements
– Use past performance measurements on comparable systems
– Leverage benchmark data for new projects
• Extra HD space is not wasted
– It can be used to strengthen the backup and recovery (B&R) policy with disk backups
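As a hedged back-of-envelope example of spindle-based sizing: if one SATA drive sustains on the order of 60 small random I/Os per second, a 16-disk array delivers roughly 16 x 60 ≈ 1000 IOPS — in line with the ORION measurements that follow — regardless of how many GB each disk holds.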
IO Benchmark Measurements
• Benchmark data can be used for
– Sizing of new projects and upgrades
– Performance baseline, testing for new hardware
• The following metrics have been measured:
– Sequential throughput (full scans)
– Random access (indexed access)
– I/O per second (indexed access)
– Metrics are measured as a function of workload
• Other test details
– Benchmark tool: Oracle’s ORION (invocation sketched below)
– Infortrend storage array: 16 x 400 GB SATA disks, 1 controller, 1 GB cache
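For reference, a hedged sketch of an ORION invocation (the test name is hypothetical; ORION reads the LUNs to test from a <testname>.lun file, one device path per line):

  # infortrend.lun lists the 16 LUN device paths, one per line
  ./orion -run simple -testname infortrend -num_disks 16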
IO Benchmark Data
Sequential Workload Performance (Read-Only Workload)
[Chart: throughput (MBps, 0-160) vs. load (1-32 outstanding asynchronous I/Os) for 12 LUNs (no RAID) and 16 LUNs (no RAID, external partition); the 16-LUN external-partition configuration delivers about 25% more throughput]
IO Benchmark Data
Latency for Small Random I/O (Read-Only Workload)
[Chart: latency (ms, 0-160) vs. load (1-80 outstanding asynchronous I/Os) for 12 LUNs (no RAID) and 16 LUNs (no RAID, external partition); the 16-LUN external-partition configuration shows about 35% lower latency]
IO Benchmark Data
Small Random I/O Performance (Read-Only Workload)
[Chart: I/Os per second (0-1000) vs. load (1-76 outstanding asynchronous I/Os) for 12 LUNs (no RAID) and 16 LUNs (no RAID, external partition); the 16-LUN external-partition configuration sustains about 60% more I/Os per second]
Conclusions
• The Database Services for Physics can provide
– Scalable Database services
• Scalable on CPU and Memory resources
• Scalable on Storage resources
– Sizing for new projects or upgrades
• Stress/Performance testing
• Integration and Validation Testing
• Benchmark data for capacity planning
Additional Benchmark Data
Sequential Workload Performance (Read-Only Workload)
[Chart: throughput (MBps, 0-140) vs. load (1-32 outstanding asynchronous I/Os) for 7 LUNs (RAID 0), 14 LUNs (no RAID), 14 LUNs (no RAID, outer partition), and 14 LUNs (no RAID, inner partition)]
Additional Benchmark Data
Latency for Small Random I/O (Read-Only Workload)
[Chart: latency (ms, 0-140) vs. load (1-80 outstanding asynchronous I/Os) for the same four configurations as the previous chart]
Additional Benchmark Data
Small Random I/O Performance (Read-Only Workload)
[Chart: I/Os per second (0-900) vs. load (1-76 outstanding asynchronous I/Os) for the same four configurations as the previous charts]