The Rise of Open Storage
…and the benefits of storage virtualisation

Agenda
• Early Storage
• Storage Arrays – the last 20 years
• The move to standardised hardware
• It's all about the software
• Parallels with Server Virtualisation
• Storage virtualisation and hardware independence
• Future speculation

Early Storage
• Pioneered by IBM
• IBM 350 Disk Storage Unit, released in 1956
• 1.52m x 1.72m x 0.74m
• 50 magnetic disks
• 5MB capacity
• 600ms access time
(IBM 350 Disk Storage Unit image courtesy of IBM Archives)

Early Storage
• "Winchester" drives, named after the .30-30 rifle
• Released in 1973
• Smaller & lighter
• 70MB capacity
• 25ms access time
(IBM 3340 DASF image courtesy of IBM Archives)

Early Storage
• Large, monolithic "refrigerator" units
• No hardware recovery
• Slow & expensive
• Cumbersome CKD format
• Each LUN/volume still a physical disk
(IBM 3380 Model CJ2 image courtesy of IBM Archives)

Early Storage
• Seagate ST-506, released in 1980
• 5MB capacity
• 5.25" form factor
• No onboard controller
• Adopted for the IBM PC
• Over 24 years, for the same capacity, drive volume shrank roughly 800-fold

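The 800-fold figure can be sanity-checked from the dimensions quoted above. A quick back-of-the-envelope calculation; the ST-506 envelope is an assumed full-height 5.25" drive size of roughly 146 x 83 x 203 mm, which is not stated in the slides:

    # Volume of the IBM 350 cabinet, dimensions from the slide (metres)
    ibm_350 = 1.52 * 1.72 * 0.74      # ~1.93 m^3

    # Assumed full-height 5.25" drive envelope, ~146 x 83 x 203 mm
    st_506 = 0.146 * 0.083 * 0.203    # ~0.00246 m^3

    print(f"{ibm_350 / st_506:.0f}x")  # ~786x, i.e. in the region of 800 times
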
Early Storage
• Disk drives today: 3TB+ capacity
• Integrated controllers
• Small form factor (2.5")
• 6Gb/s interfaces
• Very high reliability
• Low cost per GB
• Drives are now commodity components

Early Storage
• 50 years of development…
• …from cargo to pocket!

Storage – The Last 20 Years
• EMC set the standard
• Symmetrix released in 1990: an Integrated Cached Disk Array
• Dedicated hardware components
• RAID-1
• Replication (SRDF in 1994)
• Support for non-mainframe hosts (1995)

Storage – The Last 20 Years
• Integrated storage arrays separated control and management from the host
• Custom hardware design
• More functionality pushed to the array: cache I/O, replication, snapshots, logical LUNs

Storage – The Last 20 Years
• Rapid development in features
• Many vendors – IBM, Hitachi, HP
• New product categories: midrange/modular (e.g. CLARiiON), NFS appliances (filers), de-duplication devices

Storage – The Last 20 Years
• Storage has centralised
• Storage Area Networks: started with ESCON & SCSI
• Fibre Channel (1997 onwards)
• NAS (early 1990s)
• iSCSI (1999 onwards)
• FCoE

The Move to Standardisation
• Hardware components have become more reliable
• More features moved into software: RAID, replication (see the parity sketch below)
• Some bespoke features remain in silicon: the 3PAR dedicated ASIC, Hitachi VSP virtual processors

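RAID is a good example of a feature that can migrate from silicon into software, because parity is plain arithmetic. A toy illustration (not from the deck) of RAID-5-style parity:

    # RAID-5-style parity: the parity block is the XOR of the data blocks,
    # so any single lost block can be rebuilt from the survivors.
    def parity(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"disk0", b"disk1", b"disk2"]
    p = parity(data)

    # Lose disk1, then rebuild it from the remaining blocks plus parity.
    assert parity([data[0], data[2], p]) == data[1]
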
The Move to Standardisation
• Reduced cost: cheaper components, no custom design, designs reusable across generations
• Higher margins

The Move to Standardisation
• New breed of products: EMC VMAX, Hitachi VSP, HP P9500
• New companies: Compellent, 3PAR, Lefthand, Equallogic, Isilon, IBRIX
It's no surprise that these companies have been acquired for their software assets.

It's All About Software
• Storage arrays look like servers: common components, a generic physical layer
• Independence from hardware allows:
  - Reduced cost
  - Hardware designed to meet requirements
  - Quicker to market with new hardware
  - More scalability
  - A quicker, easier upgrade path
  - New features delivered without hardware upgrades

It's All About Software
• Many vendors have produced VSAs (virtual storage appliances): NetApp (its simulator, not strictly a VSA), Lefthand/HP, Gluster, FalconStor, Openfiler, OPEN-E, StorMagic, NexentaStor, Sun Amber Road
• Most of these run exactly the same codebase as the physical storage device
• As long as reliability & availability requirements are met, the underlying hardware is no longer significant

Parallels with Server Virtualisation
• Server virtualisation succeeded thanks to the power of Intel processors & Linux
• Enabled x86 hardware to serve Windows and open-systems workloads
• The Windows platform effectively demanded one server per application, forcing consolidation
• Wave 1 of server virtualisation reduced costs and improved hardware utilisation – the consolidation phase
• Wave 2 implemented mobility features: vMotion, Storage vMotion, HA, DRS

Parallels with Server Virtualisation
• Virtualisation enables disparate operating systems to be supported on the same hardware
• Workload can be balanced to meet demand
• Hardware can be added/removed non-disruptively – transparent upgrade
• Server virtualisation has enabled high scalability

Storage Virtualisation & Hardware Independence
• VSAs show that closely coupled hardware/software is no longer required
• Software can be developed and released independently
• Feature releases are not dependent on hardware

Storage Virtualisation & Hardware Independence
• Hardware can be designed to meet performance, availability & throughput, leveraging server hardware development
• Branches with smaller hardware; core data centres with bigger arrays
• Both using the same features/functionality

Future Speculation
• LUN virtualisation, rather than array virtualisation, is the key to the future
• LUNs must be individually addressable
• Ability to move a LUN between physical infrastructures
• LUN owned/managed by an array
• Transparent migration and failover
• Increased availability
• Delivers data mobility – an absolute requirement as data quantities increase (especially PB+ arrays)

Future Speculation
• A new addressing schema is necessary (sketched below):
  - Remove the restrictions of Fibre Channel
  - Address a LUN independently of its physical location
  - Allow a LUN to move around the infrastructure
  - Allow a LUN to be addressed through multiple locations
• More granular sub-object access: better load balancing, better mobility

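The deck doesn't prescribe a mechanism; the speaker notes compare it to DNS, where a stable name resolves to a changing address. A minimal sketch under that assumption – the registry, identifiers and functions here are all hypothetical:

    # Hypothetical registry: a stable LUN identifier resolves to whichever
    # array and portals currently serve it, much as a DNS name resolves to
    # changing IP addresses.
    registry = {
        "lun:9f3c-0001": {"array": "array-branch-01",
                          "portals": ["10.0.1.5", "10.0.1.6"]},
    }

    def resolve(lun_id):
        """Hosts look a LUN up by identifier, never by WWN/IP."""
        return registry[lun_id]

    def migrate(lun_id, array, portals):
        """Transparent migration: update the mapping, not the hosts."""
        registry[lun_id] = {"array": array, "portals": portals}

    migrate("lun:9f3c-0001", "array-core-01", ["10.8.0.9"])
    assert resolve("lun:9f3c-0001")["array"] == "array-core-01"
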
Future Speculation
• VMFS datastores are LUNs – which are binary objects
• VMFS divides into VMDKs for independent access
• The virtual machine becomes the object that moves around the infrastructure
• Sub-LUN access and locking enables a read & write everywhere approach (see the sketch below)
• Storage and virtualisation will be inextricably linked to each other

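The slides don't specify a locking design. One way to picture sub-LUN locking is per-extent locks, so hosts writing different regions of the same LUN never contend – a toy sketch with assumed names and a fixed extent size:

    import threading

    EXTENT = 1 << 20   # assume 1 MiB extents, purely for illustration

    class SubLunLocks:
        """Per-extent locks: writers to different extents proceed in
        parallel, which is what permits read & write everywhere."""
        def __init__(self):
            self._locks = {}
            self._guard = threading.Lock()

        def write(self, offset, data):
            extent = offset // EXTENT
            with self._guard:
                lock = self._locks.setdefault(extent, threading.Lock())
            with lock:
                pass  # the write to the backing LUN would be issued here

    # Two VMDKs at different offsets in one LUN can be written concurrently.
    locks = SubLunLocks()
    locks.write(0 * EXTENT, b"vmdk-a block")
    locks.write(8 * EXTENT, b"vmdk-b block")
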
Questions?


Editor's Notes

  • #5: The first disk drive was invented by IBM in 1956. It had a capacity of only 5MB and, as the picture shows, was huge: 50 magnetic disks housed in something the size of a large refrigerator.
  • #6: The technology quickly moved on to smaller, larger-capacity drives. Winchester drives were introduced, so called because they were intended to have two 30MB spindles – a "30-30" configuration, echoing the Winchester .30-30 rifle.
  • #7: However, storage was still large and expensive, had no built-in recovery technology, and used the cumbersome CKD format. Each "LUN" or volume was still a physical disk.
  • #8: Revolution came with the arrival of the first 5.25" form factor drive from Seagate, a company spun out of Shugart Associates. It had the same capacity as the drive from 24 years earlier but was roughly 800 times smaller. Although this drive had no onboard controller, it set the shape of drives to come. Very quickly, drive controllers were integrated onto the drive itself (hence the term IDE, Integrated Drive Electronics).
  • #9: Today's drives are stand-alone commodity devices. They have shrunk to 2.5", become highly reliable and can transfer data incredibly quickly. We're likely to see more hybrid flash/HDD models too, and SSDs are already in use where performance wins over cost.
  • #10: 50 years of development has brought us from a cargo plane to an envelope – from 5MB to 32GB.
  • #11: What about the storage array? Until drives became commodity components, arrays weren't practical to build. EMC integrated commodity drives with cache and bespoke components to create an array that operated independently of the host processor.
  • #12: The I/O subsystem that the mainframe previously needed to control disks was pushed down into the array. This meant new features could be created – replication, snapshots and RAID – all without host control.
  • #13: Features developed rapidly and new participants arrived in the market. New product categories emerged – midrange arrays, filers, de-duplication devices. These devices commanded premium prices, growing these companies substantially.
  • #14: Storage became centralised with the advent of Fibre Channel and Storage Area Networks. Today we have FCoE, FC, iSCSI, NFS and CIFS as connectivity protocols – the block protocols among them still carry SCSI commands, and the first SANs were built on ESCON.
  • #15: More recently vendors have been able to take advantage of increased processor power, reliable components and faster backplanes to move away from dedicated ASIC and component development. They've moved from being manufacturers to being assemblers and software developers. Only two vendors haven't fully shaken this off – 3PAR and Hitachi still use dedicated ASICs – but custom silicon is now too expensive for any new entrant.
  • #17: This new breed of products has seen competition from companies using commodity hardware – Compellent being the latest to fall to Dell. All of them have used commodity hardware and pushed the intelligence into software.
  • #18: So the hardware has become standardised. Storage arrays now look like servers – common components, generic physical hardware. This commoditisation means developments in the array aren't tied to hardware developments – the two can evolve independently.
  • #19: This can easily be seen in the number of VSAs – virtual storage appliances – available. Some are fully functional, some are for development only. Either way, they show that the features are now all software based.
  • #20: We can see parallels here with server virtualisation. Server virtualisation existed 38 years ago with the release of VM in 1972, but gained a new lease of life with x86 virtualisation of Windows & Linux. Wave 1 was initial adoption – the consolidation phase. Wave 2 moves the technology mainstream by improving scalability with new features.
  • #21: Server virtualisation removed the tie to the hardware. Workloads exist independently of the hardware, to the extent that hardware can be used more efficiently and services delivered with higher availability.
  • #22: VSAs for storage show hardware and software don’t need to be tightly coupled. Software can be developed independently and will be the future of storage deployments.
  • #23: So hardware can be designed to meet requirements – small for branches, clustered for high availability. Both use the same software layer, so interoperability exists.
  • #24: What about the future – where are we headed? Data needs independence from the array. It needs to be addressed as an independent object – similar to how DNS and NFS/CIFS addressing works.
  • #25: We're too tied to WWN/IP addresses. We need to be able to federate access to LUNs or objects as they move around the infrastructure.
  • #26: In the future the boundaries between storage and virtualisation will continue to be blurred.