TIA-942: Data Center Standards (& best practices) Sri Chalasani Merit 2010 Conference May 25, 2010
Objectives What are the concerns in the data center? Data center standards & best practices
Data Center Definition "Computer facility designed for continuous use by several users, and well equipped with hardware, software, peripherals, power conditioning and backup, communication equipment, security systems, etc." – businessdictionary.com "…It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices." – wikipedia.org Power conditioning Cooling Redundancy Security Notice the common terminology Levels of implementation set them apart Capacity Monitoring & Controls Growth
Why should we care? DCs house mission-critical data & equipment. In addition… Challenges… increased demand for: Applications / systems availability / SLA Complex & heterogeneous systems Service levels for uptime and responsiveness Amount of data (live and retention) Regulatory compliance and security Changing business demands Green practices & energy costs Need a facility to accommodate these
Data Center Standards Without standards No methodology for comparing data centers for reliability and availability Variations in data center designs Three commonly known tier systems  Uptime Institute  (1995) Syska Hennessy Group ANSI/TIA-942 or TIA-942  (2005, 2008, 2010)
Data Center Standards Uptime and Syska Neither addresses the challenges Both provide a framework for the disciplines in a DC - not enough details TIA-942 Requirements / guidelines for the design & installation of a data center Multidisciplinary Design Considerations Intended Audience
TIA-942 Multidisciplinary Design  Design Considerations Architectural Design   (space, floor, light, security etc.) Structured Wiring  Electrical Cooling/Mechanical Operations Design Process Space Planning Redundancy Site Selection Architectural Structural Electrical Mechanical/Cooling Fire Protection Security Building Automation Access Providers Telecom Spaces Cabinets & Racks Cabling Pathways Cabling Systems Cabling Field Testing Telecom Administration Information Technology Commissioning Maintenance
TIA-942 – Discussion Topics For today’s discussion, focus on… Data Center Spaces. Data Center Cabling Electrical Cooling Tier System
Spaces - Functional Areas TIA-942 – 5-key functional areas: (1) Entrance Room (ER) (2) Main Distribution Area (MDA) (3) Horizontal Distribution Area (HDA) (4) Zone Distribution Area (ZDA), opt. (5) Equipment Distribution Area (EDA) Ideally separate rooms but not practical for normal organizations Does not include NOC, office space, tape library storage Source: Corning – Distribution in the data center
Spaces - Functional Areas Source: ADC’s Data Center Optical Distribution Frame: The Data Center’s Main Cross-Connect (1)  (2)  (3)  (5)  (4) ZDA
Spaces – Optional ZDA Source: Corning – Distribution in the data center Between HDA and EDA Provide modularity Facilitate MACs Top of Rack
Spaces – Reduced Topology Reduced Data Center Topology Consolidated ER/MDA/EDA Applicable to most enterprises Source: Orthronics – Standards-Based Data Center Structured Cabling System Design
Spaces – Typical Requirements Typical Data Center Requirements: Location Avoid locations that restrict expansion Redundant access to facility Delivery path for large equipment Located away from EMI sources No exterior windows (increased heat & security risk) Authorized, monitored access Size – no magic formula Sized to meet the known requirements of specific equipment Include projected future as well as present requirements Ceiling Height Min. 8.5' from finished floor to any obstruction (sprinklers, lighting fixtures, or cameras) Cooling architecture may dictate higher ceilings Min. 18" clearance from water sprinkler heads Flooring / Walls Anti-static properties Sealed / painted to minimize dust Light color to enhance lighting Min. distributed floor loading 150 lbf/sqft, recommended 250 lbf/sqft
Spaces – Typical Requirements Other Equipment UPS, power distribution or conditioner: <= 100 kVA inside room; > 100 kVA in separate room Lighting Min. 500 lux in the horizontal plane and 200 lux in the vertical plane Lighting on separate circuits/panels Emergency lighting & signs Doors Min. 3' wide x 7' high, no obstructions or removable center Operational parameters Dedicated HVAC system preferred (68 – 77 F); measured every 10-30 ft at 1.5 ft height HVAC – min. 100 – 400 sqft/ton Max. temp rate of change: 5 F/hr 40% to 55% relative humidity (reduces ESD) Electrical – Signal Reference Grid (SRG)/Common Bonding Network Sprinkler systems: pre-action system or clean agent Security Camera monitoring (int./ext.) Site outside the 100-yr flood plain
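The operational parameters above lend themselves to a simple monitoring check. The sketch below is illustrative only: the thresholds come from the slide (68–77 F, 40–55% RH, max 5 F/hr rate of change), but the function name and reading format are assumptions, not part of any standard API.

```python
# Hypothetical spec check against the operational parameters above.
# Thresholds are from the slide; everything else is an illustrative assumption.

def in_spec(temp_f, rh_pct, temp_rate_f_per_hr):
    """Return the list of violated operational parameters (empty if all pass)."""
    checks = {
        "temperature 68-77 F": 68 <= temp_f <= 77,
        "humidity 40-55% RH": 40 <= rh_pct <= 55,
        "rate of change <= 5 F/hr": abs(temp_rate_f_per_hr) <= 5,
    }
    return [name for name, ok in checks.items() if not ok]

print(in_spec(72, 45, 1))   # []  -> all readings within spec
print(in_spec(80, 35, 6))   # all three parameters out of spec
```

In practice these thresholds would feed the building automation / monitoring system called out elsewhere in the deck.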
Spaces – Raised vs. Solid Floor Raised floor is a very common notion, but... Older equipment vs. newer equipment air flow (bottom-up vs. front-to-back) Hot aisle – Cold aisle: examine the air flow dynamics Cold air wants to fall, but we are pushing it up – requires pressure through perforated tiles As equipment densities increase -> higher heat load -> higher pressure of cold air through restrictive space What happens to hot air? It flows up, cools, and begins to fall again Only place to go is to creep into the cold aisle… warmer air at cabinet tops Typically see passive components or open spaces near the tops of cabinets Openings / leaks in flooring impact pressure Both options use anti-static tiles or flooring Data & electrical cabling restrictions New build – more expensive Look at your environment to see if a raised floor makes sense… don't use raised floor as the rule of thumb!
Cabling Systems Structured vs. Unstructured Cabling Backbone cabling Horizontal cabling Cross-connect in the entrance room or main distribution area Main cross-connect (MC) in the main distribution area Horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area or main distribution area Zone outlet or consolidation point in the zone distribution area Outlet in the equipment distribution area
Cabling Systems Source: Corning Cable Systems – Just the Technical Facts
Cabling Systems – Transmission Media 100-ohm twisted-pair copper cable Category 5e, 6, or 6A 10GbE: Cat 6 – 37-55 m, Cat 6A – 100 m Multimode fiber optic cable 62.5/125 µm or 50/125 µm 50/125 µm 850 nm laser-optimized MMF Singlemode optical fiber cable 75-ohm coaxial cable Type 734 & 735 cable Type T1.404 coaxial connector
Cabling Systems – Under Floor / Overhead Under Floor Cabling Less expensive than overhead if a raised floor already exists Multilevel trays / paths for fiber/copper/power Cabling in cable trays to minimize airflow blockage Data cables – hot aisle Power cables – cold aisle Provide adequate capacity for growth Electrical – color-coded PDU with locking receptacle Receptacles labeled with PDU/panel ID & breaker #
Cabling Systems – Under Floor / Overhead Overhead Can also be used in raised-floor environments Multi-level cable tray system Bottom layer – copper Middle layer – fiber Top layer – power Suspended from ceiling; min. 12" clearance above each ladder 5" separation from fluorescent lights & power Avoid blocking cooling ducts (overhead cooling) and return air paths
Racks  / Cabinets Placement of Racks / Cabinets Hot aisle / Cold aisle  - arranged in an alternating pattern (with fronts facing each other)  Cold aisles are front & Hot aisles are rear of racks/cabinets –  best results use containment If there is a raised floor Cold Aisle - PDU cables; Hot Aisle - Data cable trays Common bonding network (CBN)  Racks, cable trays, HVAC, PDU, panel boards, raised floor structure, columns –  tied to common ground Rack clearance Front Min. 3ft, 4ft recommended Back Min. 3ft.
Racks  / Cabinets Source: anixter.com Still applies for overhead cooling as well
Racks  / Cabinets Placement of racks / cabinets Front rails recessed for wire management Switch-Panel-Switch arrangement Front edge of cabinet on edge of tile Perforated tiles at front of cabinets Provide blank panels in empty spaces
Electrical Considerations Unfortunately no magic bullet! Manual process for load configuration APC's "Calculating Total Power Requirements for Data Centers" by Richard Sawyer – framework for calculating requirements Color-coded PDU with locking receptacle Receptacles labeled with PDU/panel ID & breaker # Best Practices Multiple power grid connects Dual A-B cording Sub-breakers per relay rack or lineup Intelligent PDU Generator capacity to include cooling UPS capacity to include cooling and lights Accommodate growth
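The manual load-budgeting process the slide refers to can be sketched as simple arithmetic. The sketch below is a minimal illustration, not the method from APC's white paper: the growth factor, UPS loss, lighting density, and cooling overhead figures are all assumed placeholder values you would replace with your own.

```python
# Illustrative data center power-budget sketch (assumed factors, not APC's model).

def total_power_requirement(it_load_w, growth_factor=1.2, ups_inefficiency=0.10,
                            lighting_w_per_sqft=2.0, floor_area_sqft=1000):
    """Rough total electrical requirement in watts, including cooling."""
    future_it = it_load_w * growth_factor        # headroom for projected growth
    ups_loss = future_it * ups_inefficiency      # UPS conversion/charging losses
    lighting = lighting_w_per_sqft * floor_area_sqft
    critical = future_it + ups_loss + lighting   # load that must ride through on UPS
    cooling = critical * 0.7                     # assumed cooling overhead factor
    return critical + cooling

# 50 kW of IT load in a 1,000 sqft room:
print(round(total_power_requirement(50_000)), "W total")
```

The point of the exercise is the one the slide makes: generator and UPS sizing must include cooling (and lights), not just the IT load.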
Cooling Considerations No specific guidelines Basic physics: Cooling required = Heat generated = Electrical load The #1 mitigating factor in the DC – heat removal Design Implications Address these factors to avoid limitations to capacity, redundancy, and efficiency Layout of racks in alternating rows (hot/cold aisle) Location of CRAC units Quantity and location of vents Sizing of ductwork Proper internal configuration of racks; airflow
Cooling Considerations Process Cooling capacity alone is not enough – airflow delivery is required Determine critical heat load Establish critical loads – watts per RLU/rack Determine the CFM requirements per RLU/rack Establish a floor plan – balance heat loads If possible, divide the room into cooling zones by RLU Determine appropriate air conditioner type(s) Equipment airflow (front-to-back / side-to-side) Determine cooling delivery methodology(s): room, row, rack Blank panels / short circuits Cold air containment Special considerations – high BTU Deploy a comprehensive monitoring system RLU: Rack Location Unit CFM: Cubic Feet / Min. BTU: British Thermal Unit
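The "CFM requirements per RLU/rack" step follows directly from the physics on the previous slide: the electrical load becomes heat (1 W ≈ 3.412 BTU/hr), and the standard sensible-heat relation for air (BTU/hr ≈ 1.08 × CFM × ΔT°F) gives the airflow. The 20 F supply/return split below is an assumed example value.

```python
# Illustrative per-rack airflow sizing: watts -> BTU/hr -> CFM.
# 3.412 BTU/hr per watt and the 1.08 sensible-heat constant are standard;
# the 20 F temperature rise across the equipment is an assumption.

def rack_cfm(watts, delta_t_f=20.0):
    btu_per_hr = watts * 3.412          # electrical load dissipated as heat
    return btu_per_hr / (1.08 * delta_t_f)

# A 5 kW rack at a 20 F rise needs roughly:
print(round(rack_cfm(5000)), "CFM")
```

Note how the required CFM doubles if the achievable ΔT is halved, which is why containment (keeping supply and return air separated) matters so much.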
Cooling Considerations - Airflow source: apc.com Supply & Return Based Contained Systems for best results
Fire Detection and Suppression Significant risk of electrical fires A comprehensive fire detection & suppression system is mission-critical Water detection systems Detection Both heat and smoke detection Airflow patterns determine location of detection units Interconnect with the fire suppression system, local alarms, monitoring system, etc. Install in accordance with NFPA 72E Installed below raised floors and other areas Suppression Follow NFPA 75 standard firewalls Chemical systems or clean agent (FM 200, Inergen, Ecaro-25 (FE 25), Novec 1230) Sprinkler systems — both flooded and pre-action (prevents accidental discharge) Manual systems (manual pull stations, portable fire extinguishers)
Tier System – TIA-942 4-tier system based on the Uptime Institute's Based on resilience / capacity of the MEP systems 16 pages of criteria Primary Categories Redundancy Telecommunications Architectural and structural Physical security Electrical Mechanical Sample Sub-Categories Power and cooling delivery paths Initial & ultimate watts/sqft Support space to raised floor ratio Raised floor height Floor loading pounds/sqft Utility voltage MEP: Mechanical, Electrical & Plumbing (cable routing)
Optimal Criticality – Choosing a tier Balance cost of downtime and TCO (source: apc.com)

Criticality 1
Business characteristics: Typically small businesses; limited online presence; low dependence on IT; perceive downtime as a tolerable inconvenience
Effect on system design: Numerous single points of failure in all aspects of design; no generator if UPS has 8 minutes of backup time; generally unable to sustain more than a 10-minute power outage

Criticality 2
Business characteristics: Some online revenue generation; multiple servers; phone system vital to business; dependent on email; some tolerance to scheduled downtime
Effect on system design: Some redundancy in power and cooling systems; generator backup; able to sustain 24-hour power outage; minimal thought to site selection; vapor barrier; formal data room separate from other areas

Criticality 3
Business characteristics: World-wide presence; majority of activity from online; VoIP phone system; high dependence on IT; high cost of downtime; highly recognized brand
Effect on system design: Two utility paths (active and passive); redundant power and cooling systems; redundant service providers; able to sustain 72-hour power outage; careful site selection planning; one-hour fire rating; allows for concurrent maintenance

Criticality 4
Business characteristics: Multi-million dollar business; majority of revenue from electronic transactions; business model entirely dependent on IT; extremely high cost of downtime
Effect on system design: Two independent utility paths; 2N power and cooling systems; able to sustain 96-hour power outage; stringent site selection criteria; minimum two-hour fire rating; high physical security; 24/7 onsite maintenance staff
Tier System (Source: The Uptime Institute)

Attribute / Statistic                | Tier I   | Tier II  | Tier III            | Tier IV
Power and Cooling Delivery Paths     | 1 Active | 1 Active | 1 Active, 1 Passive | 2 Active
Redundant Components                 | N        | N + 1    | N + 1               | 2(N + 1)
Support Space to Raised Floor Ratio  | 20%      | 30%      | 80 – 90%            | 100%
Initial Watts / sqft                 | 20 – 30  | 40 – 50  | 40 – 60             | 50 – 80
Ultimate Watts / sqft                | 20 – 30  | 40 – 50  | 100 – 150           | 150+
Raised Floor Height                  | 12"      | 18"      | 30 – 36"            | 30 – 36"
Floor Loading Pounds / sqft          | 85       | 100      | 150                 | 150+
Utility Voltage                      | 208, 480 | 208, 480 | 12 – 15 kV          | 12 – 15 kV
Months to Implement                  | 3        | 3 – 6    | 15 – 20             | 15 – 20
Year First Deployed                  | 1965     | 1970     | 1985                | 1995
Construction $ / sqft                | $450     | $600     | $900                | $1,100+
Annual IT Downtime Due to Site       | 28.8 hrs | 22.0 hrs | 1.6 hrs             | 0.4 hrs
Site Availability                    | 99.67%   | 99.75%   | 99.98%              | 100.00%
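The availability and downtime rows in the table are linked by simple arithmetic over an 8,760-hour year; the sketch below reproduces the conversion. Note the table rounds its percentages (e.g. Tier I's 28.8 hrs corresponds to 99.671% availability, and Tier IV's "100.00%" is a rounded 99.995%), so the computed figures differ slightly from the printed ones.

```python
# Annual downtime implied by a site-availability percentage (8,760 hrs/year).

def annual_downtime_hours(availability_pct):
    return (1 - availability_pct / 100.0) * 8760

for tier, avail in [("I", 99.67), ("II", 99.75), ("III", 99.98), ("IV", 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(avail):.1f} hrs/yr")
```

This is a handy sanity check when a vendor quotes an uptime SLA: 99.995% still allows roughly 26 minutes of downtime per year.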
Next / Action Steps Understand TIA-942 (requirement & process) Perform a risk assessment   Identify constraints, associated risks, cost of downtime For each sub-system Determine current tier All systems need not be Tier-IV; pick & choose Work with Finance & Facilities to resolve issues If Risks are too high – look at alternatives
Outsourced Data Center If it fits the business model – consider outsourcing Affordable co-location/hosted data centers and high uptime (99.995%) are not mutually exclusive Understand levels of redundancy and the uptime SLA in order to get the best combination of uptime and affordability Balance between budget and availability
Outsourced Data Center What to look for…. Claims of Uptime Tiers – III or IV; most are not certified Hardened data center buildings Data center power & cooling redundancy Telecom entrance redundancy Availability of multiple carriers Physical security SAS 70 data center compliance SLA
Review TIA-942 – organized common sense Key design parameters for the disciplines in a data center Tier System Next Steps
Resources Useful links Excellent white papers from www.apc.com TIA – http://www.tiaonline.org/ Green data center efficiency savings calculator: http://cooling.thegreengrid.org/namerica/WEB_APP/calc_index.html The Green Grid (thegreengrid.org) Department of Energy – DC Profiling Tool: http://www1.eere.energy.gov/industry/datacenters/software.html … and obviously Google or Bing it.
Questions
Contact Information Sri Chalasani [email_address] 248.223.3707

  • 1. TIA-942: Data Center Standards (& best practices) Sri Chalasani Merit 2010 Conference May 25, 2010
  • 2. Objectives What are concerns in the data center? Data center standards & best Practices
  • 3. Data Center Definition Computer facility designed for continuous use by several users, and well equipped with hardware, software, peripherals, power conditioning and backup, communication equipment, security systems , etc. – businessdictionary.com … ..It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices . – wikipedia.org Power conditioning Cooling Redundancy Security Notice the common terminology Levels of implementation set them apart Capacity Monitoring & Controls Growth
  • 4. Why should we care? DCs house mission-critical data & equipment. In addition… Challenges… increased demand for: Applications / systems availability / SLA Complex & heterogeneous systems Service levels for uptime and responsiveness Amount of data (live and retention) Regulatory compliance and security Changing business demands Green practices & energy costs Need a facility to accommodate these
  • 5. Data Center Standards Without standards No methodology for comparing data center for reliability and availability Variations in data center designs Three commonly known tier systems Uptime Institute (1995) Syska Hennessy Group ANSI/TIA-942 or TIA-942 (2005, 2008, 2010)
  • 6. Data Center Standards Uptime and Syska Neither addresses the challenges Both provide a framework for the disciplines in a DC - not enough details TIA-942 Requirements / guidelines for the design & installation of a data center Multidisciplinary Design Considerations Intended Audience
  • 7. TIA-942 Multidisciplinary Design Design Considerations Architectural Design (space, floor, light, security etc.) Structured Wiring Electrical Cooling/Mechanical Operations Design Process Space Planning Redundancy Site Selection Architectural Structural Electrical Mechanical/Cooling Fire Protection Security Building Automation Access Providers Telecom Spaces Cabinets & Racks Cabling Pathways Cabling Systems Cabling Field Testing Telecom Administration Information Technology Commissioning Maintenance
  • 8. TIA-942 – Discussion Topics For today’s discussion, focus on… Data Center Spaces. Data Center Cabling Electrical Cooling Tier System
  • 9. Spaces - Functional Areas TIA-942 – 5-key functional areas: (1) Entrance Room (ER) (2) Main Distribution Area (MDA) (3) Horizontal Distribution Area (HDA) (4) Zone Distribution Area (ZDA), opt. (5) Equipment Distribution Area (EDA) Ideally separate rooms but not practical for normal organizations Does not include NOC, office space, tape library storage Source: Corning – Distribution in the data center
  • 10. Spaces - Functional Areas Source: ADC’s Data Center Optical Distribution Frame: The Data Center’s Main Cross-Connect (1) (2) (3) (5) (4) ZDA
  • 11. Spaces – Optional ZDA Source: Corning – Distribution in the data center Between HDA and EDA Provide modularity Facilitate MACs Top of Rack
  • 12. Spaces – Reduced Topology Reduced Data Center Topology Consolidated ER/MDA/EDA Applicable to most enterprises Source: Orthronics – Standards-Based Data Center Structured Cabling System Design
  • 13. Spaces – Typical Requirements Typical Data Center Requirements: Location Avoid locations that restrict expansion Redundant Access to facility Delivery of large equipment Located away from EMI sources No exterior windows (increased heat & security risk) Provide authorized access & monitored Size – no magic formula Sized to meet the known requirements of specific equipment Include projected future as well as present requirements Ceiling Height Min. 8.5’ from finished floor to any obstruction (sprinklers, lighting fixtures, or cameras) Cooling architecture may dictate higher ceilings Min. 18” clearance from water sprinkler heads Flooring / Walls Anti-static properties Sealed / painted to minimize dust Light color to enhance lighting Min. distribution floor loading 150 lbf/Sq-ft , Reco. 250 lbf/Sq-ft
  • 14. Spaces – Typical Requirements Other Equipment UPS, power dist. or conditioner <= 100kVa inside room > 100kVa in separate room Lighting Min. 500 lux in the horizontal plane and 200 lux in the vertical plane Lighting on separate circuits/ panels Emergency lighting & signs Doors Min. 3’ wide x 7’ high no obstructions or removable center Operational parameters Dedicated HVAC system preferred (68 – 77 F); measured every 10-30 ft at 1.5ft height HVAC – min. 100 – 400 sqft/ton Max. temp rate of change: 5 F/hr 40% to 55% relative humidity (reduces ESD) Electrical - Signal Reference Grid (SRG)/Common Bonding Network Sprinkler systems pre-action system or clean agent Security Camera monitoring (int./ext.) 100-yr flood plain
  • 15. Spaces – Raised vs. Solid Floor Raised floor a very common notion, but... Older equipment vs. newer equipment air flow (bottom-up vs. front to back) Hot aisle – Cold aisle: Examine the air flow dynamics Cold air – wants to fall , but we are pushing – requires pressure through perf. tiles Equip. densities increase -> higher head load -> higher pressure of cold air through restrictive space What happens to hot air? – flows up, reduces temperature and begins to fall down again Only place to go is creep into cold aisle…. warmer air at cabinet tops Typically see passive components or open spaces near top of cabinets Opening / leaks in flooring has impact on pressure Both use anti-static tiles or flooring Data & electrical cabling restrictions New build – more expensive Have to look at your environment to see if raised floor makes sense…. do use this as the rule of thumb !
  • 16. Cabling Systems Structured vs. Unstructured Cabling Backbone cabling Horizontal cabling Cross-connect in the entrance room or main distribution area Main cross-connect (MC) in the main distribution area Horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area or main distribution area Zone outlet or consolidation point in the zone distribution area; and Outlet in the equipment distribution area
  • 17. Cabling Systems Source: Corning Cable Systems – Just the Technical Facts
  • 18. Cabling Systems – Transmission Media 100-ohm twisted-pair copper cable Category 5e or 6, 6A 10GbE: Cat 6 – 37-55mts, Cat 6A – 100mts Multimode fiber optic cable 62.5/125 µ m or 50/125 µ m 50/125 µ m 850 nm laser optimized mmf Singlemode optical fiber cable 75-ohm coaxial cable Type 734 & 735 cable Type T1.404 coaxial connector
  • 19. Cabling Systems –Under floor / Overhead Under Floor Cabling Less expensive if raised floor than overhead Multilevel trays / paths for fiber/copper/power Cabling in cable trays to minimize airflow blocks Data Cables – Hot Aisle Power Cables – Cold Aisle Provide adequate capacity for growth Electrical – color coded PDU with locking receptacle. Receptacles labeled with PDU/panel ID & breaker #
  • 20. Cabling Systems – Under floor / Overhead Overhead Can be used in raised floor environments also Multi level cable tray system Bottom layer – copper Middle layer – fiber Top layer – power Suspended from ceiling; Min.12” clearance above each ladder 5” separation from fluorescent lights & power Avoid blocking cooling ducts (overhead cooling) and return air paths
  • 21. Racks / Cabinets Placement of Racks / Cabinets Hot aisle / Cold aisle - arranged in an alternating pattern (with fronts facing each other) Cold aisles are front & Hot aisles are rear of racks/cabinets – best results use containment If there is a raised floor Cold Aisle - PDU cables; Hot Aisle - Data cable trays Common bonding network (CBN) Racks, cable trays, HVAC, PDU, panel boards, raised floor structure, columns – tied to common ground Rack clearance Front Min. 3ft, 4ft recommended Back Min. 3ft.
  • 22. Racks / Cabinets Source: anixter.com Still applies for overhead cooling as well
  • 23. Racks / Cabinets Placement of racks / cabinets Front rails recessed for wire management Switch-Panel-Switch arrangement Front edge of cabinet on edge of tile Perforated tiles at front of cabinets Provide blank panels in empty spaces
  • 24. Electrical Considerations Unfortunately no magic bullet! Manual process for load configuration APC ‘s “Calculating Total Power Requirements for Data Centers” By Richard Sawyer – framework for calculating req. Color coded PDU with locking receptacle. Receptacles labeled with PDU/panel ID & breaker # Best Practices Multiple power grid connects Dual A-B cording Sub-breakers per relay rack or lineup Intelligent PDU Generator capacity to include for cooling UPS capacity to include cooling and lights Accommodate growth
  • 25. Cooling Considerations No specific guidelines Basic physics Cooling reqd. = Heat generated = Electrical load # 1 Mitigating Factor in DC – heat removal Design Implications Address these factor to avoid limitations to capacity, redundancy, and efficiency Layout of racks in alternating rows ( hot/cold aisle ) Location of CRAC units Quantity and location of vents Sizing of ductwork Proper internal configuration of racks; airflow
  • 26. Cooling Considerations Process Cooling is not enough – airflow required Determine critical heat load Establish critical loads - watts-per-RLU/Rack Determine the CFM requirements per RLU/Rack Establish a floor plan – balance heat loads If possible, divide the room into cooling zones by RLU Determine appropriate air conditioner type(s) Equip. airflow (f->b / s->s) Determine cooling delivery methodology(s) Room, Row, Rack Blank panels/short circuits Cold air containment Special Considerations – high BTU Deploy a comprehensive monitoring system RLU: Rack Location Unit CFM: Cubic Feet / Min. BTU: British Thermal Unit
  • 27. Cooling Considerations - Airflow source: apc.com Supply & Return Based Contained Systems for best results
  • 28. Fire Detection and Suppression Significant risk of electrical fires A comprehensive fire detection & suppression system is mission-critical Water detection systems Detection Both heat and smoke detection Airflow patterns determines location of detection units Interconnect with the fire suppression system, local alarms, monitoring system, etc Install in accordance with NFPA 72E Installed below raised floors and other areas Suppression Follow NFPA 75 standard firewalls Chemical systems or Clean Agent ( FM 200 , Inergen, Ecaro-25(FE 25), Novec 1230) Sprinkler systems — both flooded and pre-action (prevent accidental discharge) Manual systems (Manual pull stations, Portable fire extinguishers
  • 29. Tier System – TIA-942 4-Tier System based on Uptime Based Resilience / Capacity the MEP systems 16-pages of criteria Primary Categories Sample Sub-Categories Power and cooling delivery paths Initial & ultimate watts/sqft Support space to raised floor ratio Raised floor height Floor loading pounds/sqft Utility voltage Redundancy Telecommunications Architectural and structural Physical Security Electrical Mechanical MEP: Mechanical, Electrical & Plumbing (cable routing)
  • 30. Optimal Criticality – Choosing a tier Balance cost of downtime and TCO source: apc.com C Business characteristics Effect on system design 1 • Typically small businesses • Limited online presence • Low dependence on IT • Perceive downtime as a tolerable Inconvenience • Numerous single points of failure in all aspects of design • No generator if UPS has 8 minutes of backup time • Generally unable to sustain more than a 10 minute power outage 2 • Some online revenue generation • Multiple servers • Phone system vital to business • Dependent on email • Some tolerance to scheduled downtime • Some redundancy in power and cooling systems • Generator backup • Able to sustain 24 hour power outage • Minimal thought to site selection • Vapor barrier • Formal data room separate from other areas 3 • World-wide presence • Majority activity from online • VoIP phone system • High dependence on IT • High cost of downtime • Highly recognized brand • Two utility paths (active and passive) • Redundant power and cooling systems • Redundant service providers • Able to sustain 72-hour power outage • Careful site selection planning • One-hour fire rating • Allows for concurrent maintenance 4 • Multi-million dollar business • Maj. of rev from electronic transactions • Business model entirely dependent on IT • Extremely high cost of downtime • Two independent utility paths • 2N power and cooling systems • Able to sustain 96 hour power outage • Stringent site selection criteria • Minimum two-hour fire rating; High phy. security • 24/7 onsite maintenance staff
  • 31. Tier System Source: The Uptime Institute Attribute / Statistic Tier I Tier II Tier III Tier IV Power and Cooling Delivery Paths 1 Active 1 Active 1 Active 1 Passive 2 Active Redundant Components N N + 1 N + 1 2(N + 1) Support Space to Raised Floor Ratio 20% 30% 80 – 90% 100% Initial Watts / sqft 20 – 30 40 – 50 40 – 60 50 – 80 Ultimate Watts / sqft 20 – 30 40 – 50 100 – 150 150+ Raised Floor Height 12” 18” 30 – 36” 30 – 36” Floor Loading Pounds / sqft 85 100 150 150+ Utility Voltage 208, 480 208, 480 12 – 15 kV 12 – 15 kV Months to Implement 3 3 – 6 15 – 20 15 – 20 Year First Deployed 1965 1970 1985 1995 Construction $ / sqft $450 $600 $900 $1,100+ Annual IT Downtime Due to Site 28.8 hrs 22.0 hrs 1.6 hrs 0.4 hrs Site Availability 99.67% 99.75% 99.98% 100.00%
  • 32. Next / Action Steps
Understand TIA-942 (requirements & process)
Perform a risk assessment: identify constraints, associated risks, and the cost of downtime
For each sub-system, determine its current tier
All systems need not be Tier IV; pick & choose
Work with Finance & Facilities to resolve issues
If risks are too high, look at alternatives
  • 33. Outsourced Data Center
If it fits the business model, consider outsourcing
An affordable co-location/hosted data center and high uptime (99.995%) are not mutually exclusive
Understand the levels of redundancy and the uptime SLA in order to get the best combination of uptime and affordability
Balance between budget and availability
  • 34. Outsourced Data Center
What to look for:
Claims of uptime tiers (III or IV); most are not certified
Hardened data center buildings
Data center power & cooling redundancy
Telecom entrance redundancy
Availability of multiple carriers
Physical security
SAS 70 data center compliance
SLA
  • 35. Review
TIA-942 – organized common sense
Key design parameters for the disciplines in a data center
Tier System
Next Steps
  • 36. Resources
Useful links:
Excellent white papers from www.apc.com
TIA – http://www.tiaonline.org/
Green data center efficiency savings calculator – http://cooling.thegreengrid.org/namerica/WEB_APP/calc_index.html
The Green Grid (thegreengrid.org)
Department of Energy – DC Profiling Tool – http://www1.eere.energy.gov/industry/datacenters/software.html
…and obviously, Google or Bing it.
  • 38. Contact Information Sri Chalasani [email_address] 248.223.3707

Editor's Notes

  • #2: * TIA – Telecommunications Industry Association * Focus on TIA-942 data standards and some of the best practices surrounding a data center. * If you get a chance to go through this document, you notice that it is fairly simple and applies a lot of common sense; probably, at the end of this review you will say.. Hmmm I know that – the TIA puts structure to random common sense.
  • #4: * Wikipedia.org: A data center or datacenter is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.
* businessdictionary.com: Computer facility designed for continuous use by several users, and well equipped with hardware, software, peripherals, power conditioning and backup, communication equipment, security systems, etc.
So what makes one data center different from another?
Levels of redundancy (cooling, electrical, connectivity, etc.)
Capacity (space, cooling, electrical, network connectivity, etc.)
Monitoring and notification
Staffing to maintain the facility
  • #5: The increased demands on enterprise data centers stem from:
New business realities
Increased energy costs
The need to deploy and manage applications that require higher availability and increased service levels for uptime and responsiveness
Regulatory compliance requirements for data retention and security
The push to implement green computing practices, which reduce costs by lowering data center power consumption
Expanding volumes of data
Managing highly complex and wildly heterogeneous environments
Q: By show of hands, how many people actually track the cost of data center operations, including energy costs?
  • #6: The first attempt at providing some level of standardization was a tier system – a system that specifies the availability and reliability of a data center
  • #7: Q: What is the difference between Uptime and TIA-942?
The Uptime Institute: Established in 1995. Not a standards body, but widely referenced in the data center construction industry. Uptime's method includes four tiers; it provides a high-level guideline but does not provide specific design details for each tier.
TIA-942: Established by a standards body (TIA) and recognized by ANSI. The TIA-942 tier system is based on the Uptime Institute's Tier Performance Standards. Although a standard, the tier system is provided as "informative and not considered to be requirements of this Standard". It provides specific design criteria so designers can build to a specific tier level, and allows data center owners to evaluate their own designs.
Comparison of the three: the Uptime and Syska methods do not provide the details needed to articulate the differences between levels. The TIA-942 provides specific details at every tier level and across a wide range of elements including telecom, architectural, electrical, mechanical, monitoring, and operations.
Objective of the standard: provide requirements & guidelines for the design & installation of a data center or computer room. The standard is to be used in the data center design / building development process. It provides input during the construction process and cuts across the multidisciplinary design efforts; by addressing the multidisciplinary aspects, it promotes cooperation in design and construction. It is admittedly a little heavier on telecommunications standards than the other disciplines, given its origin, and provides more detail than the other two. For example, the TIA-942 specifies that a tier 2 data center should have two access provider entrance pathways that are at least 20 m (66 ft) apart. Syska also specifies that a tier 2 data center should have two entrance pathways but adds no other detail.
Audience: Primarily intended for use by CIOs, Data Center Operations Managers, and Infrastructure Engineers (servers / network / cabling); it also facilitates communications with architects and facility management
  • #8: ANSI/BICSI-002 "Data Center Design Standard & Recommended Practices"
These 21 areas can be boiled down to one of these 8 core areas:
Sizing and selection: design process, space planning, site selection
Cabling infrastructure and administration: cabinets and racks, cabling pathways, cabling systems, cabling field testing
Architectural and structural considerations: architectural, structural, commissioning
Security and fire protection: fire protection, security, building automation
Electrical, grounding, and mechanical systems: electrical, HVAC/mechanical
Application distances: redundancy, information technology, maintenance
Access provider coordination and demarcation: access providers, telecom space, telecom administration
Operations: maintenance
The number one data center planning issue is heat mitigation (cooling).
  • #9: The 5 areas of focus for today will be:
Data Center Spaces
Data Center Cabling
Electrical
Cooling
Tier System
  • #10: According to TIA-942, a data center should include the following key functional areas: • One or more Entrance Rooms • Main Distribution Area (MDA) • One or more Horizontal Distribution Areas (HDA) • Equipment Distribution Area (EDA) • An optional Zone Distribution Area (ZDA) • Backbone and Horizontal Cabling
  • #11: Entrance Room Analogy: “Entrance Facility” Main Distribution Area (MDA) Analogy: “Equipment Room” Horizontal Distribution Area (HDA) Analogy: “Telecom Room” Zone Distribution Area (ZDA) Analogy: “Consolidation Point” Equipment Distribution Area (EDA) Analogy: “Work Area” Entrance Room (ER) : Location of interface with campus and carrier entrance facilities Location for access provider equipment, demarcation points and interface with other campus locations. ER is connected to the data center MDA through backbone cabling. TR’s Main Distribution Area (MDA) Centralized portion of the backbone cabling Providing connectivity between equipment rooms, entrance facilities, horizontal cross-connects, and intermediate cross-connects. Can have core aggregation switches / routers Horizontal Distribution Area (HDA) Main transition point between backbone and horizontal cabling and houses the LAN, SAN and KVM switches that connect to the active equipment (servers, mainframes, storage devices). Location of horizontal cross-connect (HC); HDA houses cross connects and active equipment (switches) for connecting to the equipment distribution area or Zone Distribution Area (if available) and storage area network (SAN). * Per the TIA-942 standard, both the MDA and HDA require separate racks for fiber, UTP and coax cable Zone Distribution Area (ZDA) Optional ZDA acts as a consolidation point within the horizontal cabling run between the HDA and EDA. ZDA cannot contain any cross connects or active equipment. Equipment Distribution Area (EDA) * Where equipment cabinets and racks house the switches and servers and where the horizontal cabling from the HDA (or ZDA if used) is terminated at patch panels
  • #12: Advantages of a ZDA Reduces pathway congestion Limits data center disruption from the MDA and eases implementation of MACs Enables a modular solution for a “pay-as-you-grow” approach Simple to deploy and/or redeploy if needed typically does not contain active electronics, but with Top of Rack topologies, I think it would qualify as a ZDA.
  • #14: Location: Avoid locations that are restricted by building components that limit expansion, such as elevators, core, outside walls, or other fixed building walls. Accessibility for the delivery of large equipment to the equipment room should be provided.
EMI: Sources are electronic devices that transmit data over a medium. EMI can couple itself onto data lines and corrupt data packets being transmitted on that medium, which may cause corruption of the data being transmitted and stored.
Floor Loading: lbf/sq ft is pounds-force per square foot, a measure of distributed load on the floor – not to be confused with the pound-foot, which is a unit of torque.
  • #15: Signal Ref. Grid: The intent of the signal reference grid is to establish an equipotential ground plane where everything connected rises and falls together in the event of an electrical disturbance, from whatever source. Electronic equipment is affected when there is a potential difference between devices. An equipotential grid significantly reduces potential differences, thus reducing current flow and eliminating the adverse effect on logic circuits.
An SRG is not required with modern IT equipment; the advent of Ethernet and fiber data interfaces has dramatically reduced the susceptibility of IT equipment to noise and transients, particularly when compared with the ground-referenced IT interface technologies of 20 years ago. The installation of an SRG is not harmful, other than the associated cost and delay.
* Recommend placing UPS equipment outside of the main data center – 13–18% of heat is generated from the UPS
  • #16: Higher equipment failures at top of the rack In the EDA, racks and cabinets should be arranged in a hot aisle/cold aisle configuration to encourage airflow and reduce heat
  • #17: Unstructured (ad-hoc) cabling: installing cabling when you need it – primarily serves as a single-use cable. Structured cabling: an organized, reusable, and flexible cabling system. There is a very large emphasis on cabling in the document.
Multidisciplinary design considerations in cabling:
Horizontal cabling
Backbone cabling
Cross-connect in the entrance room or main distribution area
Main cross-connect (MC) in the main distribution area
Horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area, or main distribution area
Zone outlet or consolidation point in the zone distribution area
Outlet in the equipment distribution area
Backbone cabling: provides connections between telecommunications closets, equipment rooms, and entrance facilities. Includes cabling from the MDA to the ER, HDA, and TR; optional cabling between HDAs is allowed. Consists of the transmission media (optical fiber cable). Can be further classified as interbuilding backbone (cabling between buildings) or intrabuilding backbone (cabling within a building).
Horizontal cabling: simply put, patch panel to wall outlet; connects a horizontal cross-connect to the outlet in the EDA or ZDA. Maximum of one consolidation point in a ZDA. Maximum distance of 90 m / 295 ft, reduced where total patch cord lengths > 10 m.
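The distance limits above can be sketched as a simple budget check, assuming the common 100 m channel and 90 m permanent-link limits from generic structured cabling practice (the further TIA-942 reduction of the horizontal run when patch cords exceed 10 m is omitted here for simplicity):

```python
# Sanity-check a copper channel against the usual structured-cabling budget:
# up to 90 m of permanent-link (horizontal) cabling, and horizontal plus
# patch cords not exceeding the 100 m channel.

MAX_CHANNEL_M = 100.0
MAX_HORIZONTAL_M = 90.0

def channel_ok(horizontal_m: float, patch_cords_m: float) -> bool:
    """True if the run fits the assumed channel and permanent-link budgets."""
    return (horizontal_m <= MAX_HORIZONTAL_M
            and horizontal_m + patch_cords_m <= MAX_CHANNEL_M)

print(channel_ok(90, 10))   # exactly at the limit
print(channel_ok(90, 15))   # patch cords blow the channel budget
```

A real design tool would also apply the standard's patch-cord derating, but this captures the basic check a cabling plan has to pass.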
  • #19: Bolded items are the recommendations per TIA-942
  • #22: Racks and Cabinets – a single rack loaded with blade servers can draw 30 kW of power
• The 10–30 seconds required for backup generators to start can result in overheated electronics
• Industry experts recommend a maximum of 15–20 kW per rack, which allows backup generators to start up without electronics overheating
Common Bonding Network: The set of metallic components that are intentionally or incidentally interconnected to provide the principal means for effecting bonding and grounding inside a telecommunications building. These components include structural steel or reinforcing rods, metallic plumbing, AC power conduit, cable racks, and bonding conductors. The CBN is connected to the exterior grounding electrode system.
  • #25: Can use nameplate specifications @ approximately 60 – 75% Multiple physically separate connections to public power grid substations Intelligent PDUs are able to provide management systems information about power consumption at the rack or even device level; provide remote power cycling Dual A-B cording: In-rack PDUs should make multiple circuits available so that redundant power supplies (designated A and B) for devices can be corded to separate circuits. Some A-B cording strategies call for both circuits to be on UPS while others call for one power supply to be on house power while the other is on UPS. Each is a function of resilience and availability.
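The nameplate-at-60–75% rule of thumb above can be sketched as a quick load estimate; the per-server wattage and rack size here are hypothetical:

```python
# Estimate actual power draw from nameplate ratings using the
# 60-75% derating rule of thumb noted above. Per-device nameplate
# watts and the rack makeup are illustrative assumptions.

def estimated_load_watts(nameplate_watts: list[float],
                         derate: float = 0.70) -> float:
    """Sum nameplate ratings and apply a derating factor (0.60-0.75 typical)."""
    return sum(nameplate_watts) * derate

# A hypothetical rack: 16 x 1U servers at 450 W nameplate each
rack = [450.0] * 16
print(f"{estimated_load_watts(rack):.0f} W")  # 7200 W nameplate -> 5040 W
```

The same estimate, summed per rack location, feeds both the UPS/PDU circuit sizing and the A-B cording plan: each redundant circuit must carry the full estimated load on its own.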
  • #26: Design Implications: The vast majority of existing data center designs do not correctly address the above factors and suffer from unexpected capacity limitations, inadequate redundancy, and poor efficiency.
  • #27: The vast majority of existing data center designs do not correctly address the above factors and suffer from unexpected capacity limitations, inadequate redundancy, and poor efficiency.
• It takes about 160 cfm for 1 kW of heat, or about 2,500 cfm for 18 kW of heat
• An average perforated floor tile will disperse 250–300 cfm
• "Equipment on the upper 2/3 of the rack fails twice as often as equipment on the bottom 1/3 of the rack"
1. Determine the Critical Load and Heat Load
Equipment plus other loads such as lighting, people, etc.
Can use nameplate specifications at approximately 60–75%
As a very general rule of thumb, consider no less than 1 ton (12,000 BTU/hr / 3,516 watts) per 400 square feet of IT equipment floor space
Factor in x% for growth
2. Determine Power Requirements on a per-RLU Basis
Use the rack or cabinet footprint area, since all manufacturers produce cabinets of generally the same size. A rack location is the specific spot on the data center floor where services that can accommodate power, cooling, physical space, network connectivity, functional capacity, and rack weight requirements are delivered. Services delivered to the rack location are specified in units of measure, such as watts or BTUs, thus forming the term rack location unit (RLU).
The reality is that a computer room usually deploys a mix of varying RLU power densities throughout its overall area.
RLUs help with site layout as well: knowing the RLUs for power and cooling enables the data center manager to adjust the physical design, the power and cooling equipment, and the rack configurations within the facility to meet the systems' requirements.
3. Determine CFM – Movement of Air
Effective cooling is accomplished by providing both the proper temperature and an adequate quantity of air to the load
General problems: improper positioning of equipment (one device's heat exhaust becoming another's intake), solid doors on cabinets
Be aware of equipment airflow – some use side-to-side, most use front-to-back
Blank unused rack positions
A large data center would require Computational Fluid Dynamics (CFD) modeling
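The airflow rules of thumb above (about 160 cfm per kW of heat, 250–300 cfm per perforated tile) can be combined into a quick tile-count estimate; the 275 cfm tile figure is simply the assumed midpoint of that range:

```python
# Estimate perforated floor tiles needed per rack from the rules of thumb:
# ~160 cfm of cool air per kW of heat, ~250-300 cfm per perforated tile.
import math

CFM_PER_KW = 160.0
CFM_PER_TILE = 275.0  # assumed midpoint of the 250-300 cfm range

def tiles_needed(rack_kw: float) -> int:
    """Perforated floor tiles required to deliver enough cool air to a rack."""
    return math.ceil(rack_kw * CFM_PER_KW / CFM_PER_TILE)

print(tiles_needed(5))   # 800 cfm  -> 3 tiles
print(tiles_needed(18))  # 2880 cfm -> 11 tiles
```

A high-density 18 kW rack needing around a dozen tiles' worth of airflow illustrates why raised-floor delivery alone breaks down at blade-server densities, and why CFD modeling is recommended for large rooms.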
  • #29: Because of the significant risk of electrical fires in a data center, installing a comprehensive fire detection and suppression system is mission-critical for protecting life and property, as well as ensuring quick operational recovery Fire Suppression: Halon 1301 (no longer recommended or in production)
  • #30: 8 primary criteria. MEP: Mechanical, Electrical, and Plumbing. What is the initial watts per square foot? What is the ultimate watts per square foot?
  • #31: Not all data centers have to be Tier IV. Choosing an optimal criticality is a balance between a business's cost of downtime and a data center's total cost of ownership. Choices may be limited depending on whether a new data center is being built or changes are being made to an existing one. For existing data center projects (i.e., retrofits), choosing a criticality is limited by the constraints of the existing structure. Identify the major constraints to see if they can be addressed or represent an acceptable risk to the business. If a constraint cannot be removed, consider alternate strategies such as an alternate location.
  • #32: http://www.webopedia.com/TERM/d/data_center_tiers.htm
A four-tier system that provides a simple and effective means for identifying different data center site infrastructure design topologies. The Uptime Institute's tiered classification system is an industry-standard approach to site infrastructure functionality that addresses common benchmarking needs. The four tiers, as classified by The Uptime Institute, are:
Tier I: a single path for power and cooling distribution, without redundant components, providing 99.671% availability
Tier II: a single path for power and cooling distribution, with redundant components, providing 99.741% availability
Tier III: multiple power and cooling distribution paths, but only one path active; redundant components; concurrently maintainable; providing 99.982% availability
Tier IV: multiple active power and cooling distribution paths; redundant components; fault tolerant; providing 99.995% availability
  • #36: As stated earlier, some of this may not be any ‘earth shattering’ information, but this standard functions as a collection point for a lot of the common sense type of activities related to the data center. Not difficult to understand – but implementation can be fairly complex