Front cover


IBM TotalStorage
Productivity Center V2.3:
Getting Started
Effectively use the IBM TotalStorage
Productivity Center

Learn to install and customize the IBM
TotalStorage Productivity Center

Understand the IBM TotalStorage
Open Software Family




                                                            Mary Lovelace
                                                         Larry Mc Gimsey
                                                             Ivo Gomilsek
                                                       Mary Anne Marquez




ibm.com/redbooks
International Technical Support Organization

IBM TotalStorage Productivity Center V2.3:
Getting Started

December 2005




                                               SG24-6490-01
Note: Before using this information and the product it supports, read the information in “Notices” on
 page xiii.




Second Edition (December 2005)

This edition applies to Version 2, Release 3 of IBM TotalStorage Productivity Center (product numbers
5608-UC1, 5608-UC3, 5608-UC4, and 5608-UC5).




© Copyright International Business Machines Corporation 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

                   Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
                   Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv

                   Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
                   The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
                   Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
                   Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

Part 1. IBM TotalStorage Productivity Center foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

                   Chapter 1. IBM TotalStorage Productivity Center overview . . . . . . . . . . . . . . . . . . . . . . 3
                   1.1 Introduction to IBM TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
                      1.1.1 Standards organizations and standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
                   1.2 IBM TotalStorage Open Software family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
                   1.3 IBM TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
                      1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data . . . . . . . . . 7
                      1.3.2 Fabric subject matter expert: Productivity Center for Fabric . . . . . . . . . . . . . . . . . . 9
                      1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk . . . . . . . . . 12
                      1.3.4 Replication subject matter expert: Productivity Center for Replication . . . . . . . . . 14
                   1.4 IBM TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
                      1.4.1 Productivity Center for Disk and Productivity Center for Replication . . . . . . . . . . 17
                      1.4.2 Event services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
                   1.5 Taking steps toward an On Demand environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

                   Chapter 2. Key concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               27
                   2.1 IBM TotalStorage Productivity Center architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . .                           28
                      2.1.1 Architectural overview diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   28
                      2.1.2 Architectural layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           29
                      2.1.3 Relationships between the managers and components . . . . . . . . . . . . . . . . . . . .                                      31
                      2.1.4 Collecting data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          32
                   2.2 Standards used in IBM TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . .                                34
                      2.2.1 ANSI standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           34
                      2.2.2 Web-Based Enterprise Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                            34
                      2.2.3 Storage Networking Industry Association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          35
                      2.2.4 Simple Network Management Protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                            36
                      2.2.5 Fibre Alliance MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           37
                   2.3 Service Location Protocol (SLP) overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        38
                      2.3.1 SLP architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           38
                      2.3.2 Common Information Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     47
                   2.4 Component interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             49
                      2.4.1 CIMOM discovery with SLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     49
                      2.4.2 How CIM Agent works. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 50
                   2.5 Tivoli Common Agent Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  51
                      2.5.1 Tivoli Agent Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               53
                      2.5.2 Common Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             53

Part 2. Installing the IBM TotalStorage Productivity Center base product suite . . . . . . . . . . . . . . . . . 55

                   Chapter 3. Installation planning and considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . 57


© Copyright IBM Corp. 2005. All rights reserved.                                                                                                            iii
                3.1 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   58
               3.2 Installation prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        58
                  3.2.1 TCP/IP ports used by TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . .                                59
                  3.2.2 Default databases created during the installation . . . . . . . . . . . . . . . . . . . . . . . . .                          62
               3.3 Our lab setup environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             62
               3.4 Pre-installation check list. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        64
               3.5 User IDs and security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         65
                  3.5.1 User IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     65
                  3.5.2 Increasing user security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             68
                  3.5.3 Certificates and key files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           69
                  3.5.4 Services and service accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  69
               3.6 Starting and stopping the managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  70
               3.7 Windows Management Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        70
               3.8 World Wide Web Publishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               73
               3.9 Uninstalling Internet Information Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   73
               3.10 Installing SNMP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       73
               3.11 IBM TotalStorage Productivity Center for Fabric. . . . . . . . . . . . . . . . . . . . . . . . . . . . .                         75
                  3.11.1 The computer name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               75
                  3.11.2 Database considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 75
                  3.11.3 Windows Terminal Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   75
                  3.11.4 Tivoli NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        76
                  3.11.5 Personal firewall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         77
                  3.11.6 Changing the HOSTS file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 77
               3.12 IBM TotalStorage Productivity Center for Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        78
                  3.12.1 Server recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  78
                  3.12.2 Supported subsystems and databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          78
                  3.12.3 Security considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             79
                  3.12.4 Creating the DB2 database. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  81

               Chapter 4. Installing the IBM TotalStorage Productivity Center suite . . . . . . . . . . . . . 83
               4.1 Installing the IBM TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
                  4.1.1 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
               4.2 Prerequisite Software Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
                  4.2.1 Best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
                  4.2.2 Installing prerequisite software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
               4.3 Suite installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
                  4.3.1 Best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
                  4.3.2 Installing the TotalStorage Productivity Center suite . . . . . . . . . . . . . . . . . . . . . 110
                  4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base. . . . . . . . . 125
                  4.3.4 IBM TotalStorage Productivity Center for Disk . . . . . . . . . . . . . . . . . . . . . . . . . . 140
                  4.3.5 IBM TotalStorage Productivity Center for Replication. . . . . . . . . . . . . . . . . . . . . 146
                  4.3.6 IBM TotalStorage Productivity Center for Fabric. . . . . . . . . . . . . . . . . . . . . . . . . 157
                  4.3.7 IBM TotalStorage Productivity Center for Data . . . . . . . . . . . . . . . . . . . . . . . . . . 171

               Chapter 5. CIMOM install and configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          191
               5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   192
               5.2 Planning considerations for Service Location Protocol . . . . . . . . . . . . . . . . . . . . . . . .                            192
                  5.2.1 Considerations for using SLP DAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      192
                  5.2.2 SLP configuration recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        193
               5.3 General performance guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 194
               5.4 Planning considerations for CIMOM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    194
                  5.4.1 CIMOM configuration recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                            195
               5.5 Installing CIM agent for ESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             196


                      5.5.1 ESS CLI Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      196
                     5.5.2 DS CIM Agent install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         202
                     5.5.3 Post Installation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        211
                  5.6 Configuring the DS CIM Agent for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      212
                     5.6.1 Registering DS Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             212
                     5.6.2 Registering ESS Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              213
                     5.6.3 Register ESS server for Copy services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     214
                     5.6.4 Restart the CIMOM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          215
                     5.6.5 CIMOM user authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                215
                  5.7 Verifying connection to the ESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             216
                     5.7.1 Problem determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            219
                     5.7.2 Confirming the ESS CIMOM is available . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        220
                     5.7.3 Setting up the Service Location Protocol Directory Agent . . . . . . . . . . . . . . . . .                               221
                     5.7.4 Configuring TotalStorage Productivity Center for SLP discovery . . . . . . . . . . . .                                   223
                     5.7.5 Registering the DS CIM Agent to SLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      224
                     5.7.6 Verifying and managing CIMOM’s availability. . . . . . . . . . . . . . . . . . . . . . . . . . .                         224
                  5.8 Installing CIM agent for IBM DS4000 family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    225
                     5.8.1 Verifying and Managing CIMOM availability . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        233
                  5.9 Configuring CIMOM for SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                         234
                     5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account. . . . . .                                         235
                     5.9.2 Registering the SAN Volume Controller host in SLP . . . . . . . . . . . . . . . . . . . . .                              241
                  5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary . . . . . .                                          241
                     5.10.1 SLP registration and slptool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              242
                     5.10.2 Persistency of SLP registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               243
                     5.10.3 Configuring slp.reg file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          243

Part 3. Configuring the IBM TotalStorage Productivity Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245

                  Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk. . . . . . . . . . .                                         247
                  6.1 Productivity Center for Disk Discovery summary . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        248
                  6.2 SLP DA definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     248
                     6.2.1 Verifying and managing CIMOMs availability . . . . . . . . . . . . . . . . . . . . . . . . . . .                         256
                  6.3 Disk and Replication Manager remote GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      259
                     6.3.1 Installing Remote Console for Performance Manager function. . . . . . . . . . . . . .                                    270
                     6.3.2 Launching Remote Console for TotalStorage Productivity Center . . . . . . . . . . .                                      277

                  Chapter 7. Configuring TotalStorage Productivity Center for Replication . . . . . . . . 279
                  7.1 Installing a remote GUI and CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280

                  Chapter 8. Configuring IBM TotalStorage Productivity Center for Data . . . . . . . . . .                                          289
                  8.1 Configuring the CIM Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            290
                     8.1.1 CIM and SLP interfaces within Data Manager . . . . . . . . . . . . . . . . . . . . . . . . . .                           290
                     8.1.2 Configuring CIM Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             290
                     8.1.3 Setting up a disk alias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        293
                  8.2 Setting up the Web GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          295
                     8.2.1 Using IBM HTTP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              295
                     8.2.2 Using Internet Information Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                299
                     8.2.3 Configuring the URL in Fabric Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      303
                  8.3 Installing the Data Manager remote console. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     304
                  8.4 Configuring Data Manager for Databases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     313
                  8.5 Alert Disposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   316

                  Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric . . . . . . . . . 319
                  9.1 TotalStorage Productivity Center component interaction . . . . . . . . . . . . . . . . . . . . . . 320


                       9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base. . . . . . . . .                                      320
                      9.1.2 SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   320
                      9.1.3 Tivoli Provisioning Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              320
                   9.2 Post-installation procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          321
                      9.2.1 Installing Productivity Center for Fabric – Agent . . . . . . . . . . . . . . . . . . . . . . . . .                      321
                      9.2.2 Installing Productivity Center for Fabric – Remote Console . . . . . . . . . . . . . . . .                               331
                   9.3 Configuring IBM TotalStorage Productivity Center for Fabric . . . . . . . . . . . . . . . . . . .                             342
                      9.3.1 Configuring SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           342
                      9.3.2 Configuring the outband agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 346
                      9.3.3 Checking inband agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             348
                      9.3.4 Performing an initial poll and setting up the poll interval . . . . . . . . . . . . . . . . . . .                        349

                   Chapter 10. Deployment of agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  351
                   10.1 Installing the agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      352
                   10.2 Data Agent installation using the installer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                354
                   10.3 Deploying the agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        361

Part 4. Using the IBM TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

                   Chapter 11. Using TotalStorage Productivity Center for Disk. . . . . . . . . . . . . . . . . . .                                  375
                   11.1 Productivity Center common base: Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        376
                   11.2 Launching TotalStorage Productivity Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      376
                   11.3 Exploiting Productivity Center common base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       377
                      11.3.1 Launch Device Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               378
                   11.4 Performing volume inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              378
                   11.5 Changing the display name of a storage device . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        382
                   11.6 Working with ESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       383
                      11.6.1 ESS Volume inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              384
                      11.6.2 Assigning and unassigning ESS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . .                           385
                      11.6.3 Creating new ESS volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  387
                      11.6.4 Launch device manager for an ESS device . . . . . . . . . . . . . . . . . . . . . . . . . . .                           388
                   11.7 Working with DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          389
                      11.7.1 DS8000 Volume inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 390
                      11.7.2 Assigning and unassigning DS8000 Volumes . . . . . . . . . . . . . . . . . . . . . . . . .                              392
                      11.7.3 Creating new DS8000 volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    393
                       11.7.4 Launch device manager for a DS8000 device . . . . . . . . . . . . . . . . . . . . . . . .                               394
                   11.8 Working with SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   396
                      11.8.1 Working with SAN Volume Controller MDisks. . . . . . . . . . . . . . . . . . . . . . . . . .                            396
                      11.8.2 Creating new MDisks on supported storage devices . . . . . . . . . . . . . . . . . . . .                                399
                      11.8.3 Create and view SAN Volume Controller VDisks . . . . . . . . . . . . . . . . . . . . . . .                              402
                   11.9 Working with DS4000 family or FAStT storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        406
                      11.9.1 Working with DS4000 or FAStT volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          407
                      11.9.2 Creating DS4000 or FAStT volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       409
                      11.9.3 Assigning hosts to DS4000 and FAStT Volumes . . . . . . . . . . . . . . . . . . . . . . .                               413
                      11.9.4 Unassigning hosts from DS4000 or FAStT volumes. . . . . . . . . . . . . . . . . . . . .                                 414
                      11.9.5 Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         415
                   11.10 Event Action Plan Builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           416
                      11.10.1 Applying an Event Action Plan to a managed system or group . . . . . . . . . . .                                       421
                      11.10.2 Exporting and importing Event Action Plans . . . . . . . . . . . . . . . . . . . . . . . . . .                         423

                   Chapter 12. Using TotalStorage Productivity Center Performance Manager . . . . . .                                                427
                   12.1 Exploiting Performance Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 428
                      12.1.1 Performance Manager GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 429
                      12.1.2 Performance Manager data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       429


   12.1.3 Using IBM Director Scheduler function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        435
   12.1.4 Reviewing data collection task status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      437
   12.1.5 Managing Performance Manager Database . . . . . . . . . . . . . . . . . . . . . . . . . . .                                439
   12.1.6 Performance Manager gauges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       443
   12.1.7 ESS thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           457
   12.1.8 Data collection for SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          460
   12.1.9 SAN Volume Controller thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       461
   12.1.10 Data collection for the DS6000 and DS8000 . . . . . . . . . . . . . . . . . . . . . . . . .                               463
   12.1.11 DS6000 and DS8000 thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          466
12.2 Exploiting gauges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         467
   12.2.1 Before you begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           468
   12.2.2 Creating gauges: an example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    468
   12.2.3 Zooming in on the specific time period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       471
   12.2.4 Modify gauge to view array level metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                         471
   12.2.5 Modify gauge to review multiple metrics in same chart. . . . . . . . . . . . . . . . . . .                                 474
12.3 Performance Manager command line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . .                              475
   12.3.1 Performance Manager CLI commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                             475
   12.3.2 Sample command outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     477
12.4 Volume Performance Advisor (VPA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      478
   12.4.1 VPA introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           478
   12.4.2 The provisioning challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 478
   12.4.3 Workload characterization and workload profiles . . . . . . . . . . . . . . . . . . . . . . .                              479
   12.4.4 Workload profile values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               479
   12.4.5 How the Volume Performance Advisor makes decisions . . . . . . . . . . . . . . . . .                                       480
   12.4.6 Enabling the Trace Logging for Director GUI Interface . . . . . . . . . . . . . . . . . . .                                481
   12.4.7 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          482
   12.4.8 Creating and managing workload profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . .                           508

Chapter 13. Using TotalStorage Productivity Center for Data . . . . . . . . . . . . . . . . . .                                      521
13.1 TotalStorage Productivity Center for Data overview . . . . . . . . . . . . . . . . . . . . . . . . .                            522
   13.1.1 Business purpose of TotalStorage Productivity Center for Data. . . . . . . . . . . .                                       522
   13.1.2 Components of TotalStorage Productivity Center for Data . . . . . . . . . . . . . . . .                                    522
   13.1.3 Security considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              523
13.2 Functions of TotalStorage Productivity Center for Data . . . . . . . . . . . . . . . . . . . . . .                              523
   13.2.1 Basic menu displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              524
   13.2.2 Discover and monitor Agents, disks, filesystems, and databases . . . . . . . . . .                                         526
   13.2.3 Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        529
   13.2.4 Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   532
   13.2.5 Chargeback: Charging for storage usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                           533
13.3 OS Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       533
   13.3.1 Navigation tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          534
   13.3.2 Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       535
   13.3.3 Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        540
   13.3.4 Pings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .    542
   13.3.5 Probes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      545
   13.3.6 Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     547
   13.3.7 Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      552
13.4 OS Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   555
   13.4.1 Alerting navigation tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             558
   13.4.2 Computer Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            560
   13.4.3 Filesystem Alerts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           562
   13.4.4 Directory Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         563
   13.4.5 Alert logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     564
13.5 Policy management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           565
   13.5.1 Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        565
   13.5.2 Network Appliance Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    570
   13.5.3 Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         570
   13.5.4 Filesystem extension and LUN provisioning . . . . . . . . . . . . . . . . . . . . . . . . .                               576
   13.5.5 Scheduled Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               582
13.6 Database monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            583
   13.6.1 Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        584
   13.6.2 Probes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        585
   13.6.3 Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      586
   13.6.4 Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       587
13.7 Database Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        588
   13.7.1 Instance Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           588
   13.7.2 Database-Tablespace Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      588
   13.7.3 Table Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        589
   13.7.4 Alert log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     589
13.8 Databases policy management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    589
   13.8.1 Network Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              590
   13.8.2 Instance Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            591
   13.8.3 Database Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              591
13.9 Database administration samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    591
   13.9.1 Database up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           591
   13.9.2 Database utilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              591
   13.9.3 Need for reorganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 591
13.10 Data Manager reporting capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     592
   13.10.1 Major reporting categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   593
13.11 Using the standard reporting functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      594
   13.11.1 Asset Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              595
   13.11.2 Storage Subsystems Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                         604
   13.11.3 Availability Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               604
   13.11.4 Capacity Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               605
   13.11.5 Usage Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              607
   13.11.6 Usage Violation Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    610
   13.11.7 Backup Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               627
13.12 TotalStorage Productivity Center for Data ESS Reporting . . . . . . . . . . . . . . . . . .                                   634
   13.12.1 ESS Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              634
13.13 IBM Tivoli Storage Resource Manager top 10 reports . . . . . . . . . . . . . . . . . . . . .                                  653
   13.13.1 ESS used and free storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    653
   13.13.2 ESS attached hosts report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    656
   13.13.3 Computer Uptime Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      657
   13.13.4 Growth in storage used and number of files . . . . . . . . . . . . . . . . . . . . . . . . .                             659
   13.13.5 Incremental backup trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    661
   13.13.6 Database reports against DBMS size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                           665
   13.13.7 Database instance storage report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       667
   13.13.8 Database reports size by instance and by computer . . . . . . . . . . . . . . . . . .                                    667
   13.13.9 Locate the LUN on which a database is allocated . . . . . . . . . . . . . . . . . . . .                                  669
   13.13.10 Finding important files on your systems . . . . . . . . . . . . . . . . . . . . . . . . . . .                           672
13.14 Creating customized reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 683
   13.14.1 System Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               683
   13.14.2 Reports owned by a specific username . . . . . . . . . . . . . . . . . . . . . . . . . . . .                             686
   13.14.3 Batch Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            688
13.15 Setting up a schedule for daily reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   697
13.16 Setting up a reports Web site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               698
13.17 Charging for storage usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700

Chapter 14. Using TotalStorage Productivity Center for Fabric . . . . . . . . . . . . . . . . .                                      703
14.1 NetView navigation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 704
   14.1.1 NetView interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            704
   14.1.2 Maps and submaps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                704
   14.1.3 NetView window structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  704
   14.1.4 NetView Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             705
   14.1.5 NetView Navigation Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  707
   14.1.6 Object selection and NetView properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          707
   14.1.7 Object symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           709
   14.1.8 Object status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        709
   14.1.9 Status propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             711
   14.1.10 NetView and Productivity Center for Fabric integration . . . . . . . . . . . . . . . . .                                  711
14.2 Walk-through of Productivity Center for Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        712
   14.2.1 Device Centric view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             713
   14.2.2 Host Centric view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            714
   14.2.3 SAN view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         714
   14.2.4 Launching element managers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      723
   14.2.5 Explore view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         725
14.3 Topology views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        725
   14.3.1 SAN view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         727
   14.3.2 Device Centric View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              731
   14.3.3 Host Centric View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            732
   14.3.4 iSCSI discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           733
   14.3.5 MDS 9000 discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               734
14.4 SAN menu options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            735
   14.4.1 SAN Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           735
14.5 Application launch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        739
   14.5.1 Native support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         740
   14.5.2 NetView support for Web interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       740
   14.5.3 Launching TotalStorage Productivity Center for Data. . . . . . . . . . . . . . . . . . . .                                 742
   14.5.4 Other menu options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               742
14.6 Status cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     743
14.7 Practical cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       745
   14.7.1 Cisco MDS 9000 discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   745
   14.7.2 Removing a connection on a device running an inband agent . . . . . . . . . . . . .                                        747
   14.7.3 Removing a connection on a device not running an agent . . . . . . . . . . . . . . . .                                     750
   14.7.4 Powering off a switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              752
   14.7.5 Running discovery on a RNID-compatible device. . . . . . . . . . . . . . . . . . . . . . .                                 756
   14.7.6 Outband agents only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              758
   14.7.7 Inband agents only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             760
   14.7.8 Disk devices discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               762
   14.7.9 Well placed agent strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 764
14.8 NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   766
   14.8.1 Reporting overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             767
   14.8.2 SNMP and MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              767
14.9 NetView setup and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   769
   14.9.1 Advanced Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              769
   14.9.2 Copy Brocade MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                770
   14.9.3 Loading MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           771
14.10 Historical reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         774
   14.10.1 Creating a Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  775
   14.10.2 Database maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     783
   14.10.3 Troubleshooting the Data Collection daemon . . . . . . . . . . . . . . . . . . . . . . .                                 784
   14.10.4 NetView Graph Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                784
14.11 Real-time reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           786
   14.11.1 MIB Tool Builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             787
   14.11.2 Displaying real-time data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  791
   14.11.3 SmartSets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          794
   14.11.4 SmartSets and Data Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       802
   14.11.5 Seed file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        805
14.12 Productivity Center for Fabric and iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      810
14.13 What is iSCSI? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        811
14.14 How does iSCSI work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              811
14.15 Productivity Center for Fabric and iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      812
   14.15.1 Functional description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               813
   14.15.2 iSCSI discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              813
14.16 ED/FI - SAN Error Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               814
   14.16.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         814
   14.16.2 Error processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             816
   14.16.3 Configuration for ED/FI - SAN Error Predictor . . . . . . . . . . . . . . . . . . . . . . .                              818
   14.16.4 Using ED/FI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          820
   14.16.5 Searching for the faulted device on the topology map . . . . . . . . . . . . . . . . .                                   822
   14.16.6 Removing notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 825

Chapter 15. Using TotalStorage Productivity Center for Replication . . . . . . . . . . . . .                                        827
15.1 TotalStorage Productivity Center for Replication overview . . . . . . . . . . . . . . . . . .                                  828
   15.1.1 Supported Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     828
   15.1.2 Replication session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               830
   15.1.3 Storage group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             831
   15.1.4 Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           831
   15.1.5 Relationship of group, pool, and session . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          832
   15.1.6 Copyset and sequence concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                         833
15.2 Exploiting Productivity Center for replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     834
   15.2.1 Before you start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            834
   15.2.2 Adding a replication device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   834
   15.2.3 Creating a storage group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  838
   15.2.4 Modifying a storage group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   841
   15.2.5 Viewing storage group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      842
   15.2.6 Deleting a storage group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  843
   15.2.7 Creating a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 844
   15.2.8 Modifying a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  847
   15.2.9 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 848
   15.2.10 Viewing storage pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                      849
   15.2.11 Creating storage paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 850
   15.2.12 Point-in-Time Copy - creating a session . . . . . . . . . . . . . . . . . . . . . . . . . . . .                          852
   15.2.13 Creating a session - verifying source-target relationship . . . . . . . . . . . . . . .                                  856
   15.2.14 Continuous Synchronous Remote Copy - creating a session . . . . . . . . . . .                                            861
   15.2.15 Managing a Point-in-Time copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        866
   15.2.16 Managing a Continuous Synchronous Remote Copy . . . . . . . . . . . . . . . . .                                          873
15.3 Using Command Line Interface (CLI) for replication . . . . . . . . . . . . . . . . . . . . . . . .                             884
   15.3.1 Session details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           886
   15.3.2 Starting a session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            888
   15.3.3 Suspending a session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                892
   15.3.4 Terminating a session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               893
Chapter 16. Hints, tips, and good-to-knows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     899
16.1 SLP configuration recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  900
   16.1.1 SLP registration and slptool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             901
16.2 Tivoli Common Agent Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              901
   16.2.1 Locations of configured user IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 901
   16.2.2 Resource Manager registration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 902
   16.2.3 Tivoli Agent Manager status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              902
   16.2.4 Registered Fabric Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             904
   16.2.5 Registered Data Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             906
16.3 Launchpad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   906
   16.3.1 Launchpad installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           907
   16.3.2 Launchpad customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              909
16.4 Remote consoles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       911
16.5 Verifying whether a port is in use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            911
16.6 Manually removing old CIMOM entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   911
16.7 Collecting logs for support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        917
   16.7.1 IBM Director logfiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        917
   16.7.2 Using Event Action Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             921
   16.7.3 Following Discovery using Windows raswatch utility . . . . . . . . . . . . . . . . . . . .                             921
   16.7.4 DB2 database checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              922
   16.7.5 IBM WebSphere tracing and logfile browsing . . . . . . . . . . . . . . . . . . . . . . . . . .                         927
16.8 SLP and CIM Agent problem determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       928
   16.8.1 Enabling SLP tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           929
   16.8.2 Device registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        930
16.9 Replication Manager problem determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                       930
   16.9.1 Diagnosing an indications problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  931
   16.9.2 Restarting the replication environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                   931
16.10 Enabling trace logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        931
   16.10.1 Enabling WebSphere Application Server trace . . . . . . . . . . . . . . . . . . . . . . . .                           932
16.11 ESS user authentication problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                940
16.12 SVC Data collection task failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             940

Chapter 17. Database management and reporting . . . . . . . . . . . . . . . . . . . . . . . . . . .                              943
17.1 DB2 database overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           944
17.2 Database purging in TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . .                          944
   17.2.1 Performance manager database panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        945
17.3 IBM DB2 tool suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      948
   17.3.1 Command Line Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             948
   17.3.2 Development Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          950
   17.3.3 General Administration Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               950
   17.3.4 Monitoring Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       951
17.4 DB2 Command Center overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                  952
   17.4.1 Command Center navigation example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        952
17.5 DB2 Command Center custom report example . . . . . . . . . . . . . . . . . . . . . . . . . . . .                            956
   17.5.1 Extracting LUN data report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             956
   17.5.2 Command Center report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              959
17.6 Exporting collected performance data to a file . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    976
   17.6.1 Control Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      976
   17.6.2 Data extraction tools, tips and reporting methods. . . . . . . . . . . . . . . . . . . . . . .                         979
17.7 Database backup and recovery overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     984
17.8 Backup example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      988

Appendix A. Worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
User IDs and passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992

Server information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       992
                   User IDs and passwords for key files and installation. . . . . . . . . . . . . . . . . . . . . . . . . .                          993
                Storage device information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           994
                   IBM TotalStorage Enterprise Storage Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                        994
                   IBM FAStT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     995
                   IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               996

                Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         997
                IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     997
                Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     997
                Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     997
                How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            998
                Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .    998

                Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999




xii   IBM TotalStorage Productivity Center V2.3: Getting Started
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are
inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,
marketing, or distributing application programs conforming to IBM's application programming interfaces.




© Copyright IBM Corp. 2005. All rights reserved.                                                             xiii
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
  AIX®                                  iSeries™                             Sequent®
  Cloudscape™                           MVS™                                 ThinkPad®
  DB2®                                  Netfinity®                           Tivoli Enterprise™
  DB2 Universal Database™               NetView®                             Tivoli Enterprise Console®
  e-business on demand™                 OS/390®                              Tivoli®
  Enterprise Storage Server®            Predictive Failure Analysis®         TotalStorage®
  Eserver®                              pSeries®                             WebSphere®
  Eserver®                              QMF™                                 xSeries®
  FlashCopy®                            Redbooks™                            z/OS®
  IBM®                                  Redbooks (logo)      ™               zSeries®
  ibm.com®                              S/390®                               1-2-3®

The following terms are trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other
countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, and service names may be trademarks or service marks of others.




Preface

                 IBM® TotalStorage® Productivity Center is a suite of infrastructure management software
                 that can centralize, automate, and simplify the management of complex and heterogeneous
                 storage environments. It can help reduce the effort of managing complex storage
                 infrastructures, improve storage capacity utilization, and improve administration efficiency.
                 IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and
                 brings together, in a single point, the management of storage devices, fabric, and data.

                 This IBM Redbook is intended for administrators and users who are installing and using IBM
                 TotalStorage Productivity Center V2.3. It provides an overview of the product components
                 and functions. We describe the hardware and software environment required, provide a
                 step-by-step installation procedure, and offer customization and usage hints and tips.

                 This book is not a replacement for the existing IBM Redbooks™, or product manuals, that
                 detail the implementation and configuration of the individual products that make up the IBM
                 TotalStorage Productivity Center, or the products as they may have been called in previous
                 versions. We refer to those books as appropriate throughout this book.



The team that wrote this redbook
                 This redbook was produced by a team of specialists from around the world working at the
                 International Technical Support Organization (ITSO), San Jose Center.

                 Mary Lovelace is a Consulting IT Specialist at the ITSO in San Jose, California. She has
                 more than 20 years of experience with IBM in large systems, storage and Storage Networking
                 product education, system engineering and consultancy, and systems support.

                 Larry Mc Gimsey is a consulting IT Architect working in Managed Storage Services delivery
                 supporting worldwide SAN storage customers. He has over 30 years of experience in IT. He
                 joined IBM 6 years ago as a result of an outsourcing engagement. Most of his experience
                 prior to joining IBM was in mainframe systems support. It included system programming,
                 performance management, capacity planning, system automation and storage management.
                 Since joining IBM, Larry has been working with large SAN environments. He currently works
                 with Managed Storage Services offering and delivery teams to define the architecture used to
                 deliver worldwide storage services.

                 Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central
                 and Eastern European Region in architecting, deploying, and supporting SAN/storage/DR
                 solutions. His areas of expertise include SAN, storage, HA systems, xSeries® servers,
                 network operating systems (Linux, MS Windows, OS/2®), and Lotus® Domino™ servers. He
                 holds several certifications from various vendors (IBM, Red Hat, Microsoft). Ivo has
                 contributed to various other redbooks on Tivoli products, SAN, Linux/390, xSeries, and Linux.

                 Mary Anne Marquez is the team lead for tape performance at IBM Tucson. She has
                 extensive knowledge in setting up a TotalStorage Productivity Center environment for use
                 with Copy Services and Performance Management, as well as debugging the various
                 components of TotalStorage Productivity Center including WebSphere, ICAT, and the CCW
                 interface for ESS. In addition to TPC, Mary Anne has experience with the native Copy
                 Services tools on ESS model-800 and DS8000. She has authored several performance
                 white papers.

Thanks to the following people for their contributions to this project:

               Sangam Racherla
               Yvonne Lyon
               ITSO, San Jose Center

               Bob Haimowitz
               ITSO, Raleigh Center

               Diana Duan
               Tina Dunton
               Nancy Hobbs
               Paul Lee
               Thiha Than
               Miki Walter
               IBM San Jose

               Martine Wedlake
               IBM Beaverton

               Ryan Darris
               IBM Tucson

               Doug Dunham
               Tivoli Storage SWAT Team

               Mike Griese
               Technical Support Marketing Lead, Rochester

               Curtis Neal
               Scott Venuti
               Open System Demo Center, San Jose




Become a published author
        Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with
        specific products or solutions, while getting hands-on experience with leading-edge
        technologies. You'll team with IBM technical professionals, Business Partners and/or
        customers.

        Your efforts will help increase product acceptance and customer satisfaction. As a bonus,
        you'll develop a network of contacts in IBM development labs, and increase your productivity
        and marketability.

        Find out more about the residency program, browse the residency index, and apply online at:
              ibm.com/redbooks/residencies.html



Comments welcome
        Your comments are important to us!

        We want our Redbooks to be as helpful as possible. Send us your comments about this or
        other Redbooks in one of the following ways:
           Use the online Contact us review redbook form found at:
              ibm.com/redbooks
           Send your comments in an email to:
              redbook@us.ibm.com
           Mail your comments to:
              IBM Corporation, International Technical Support Organization
              Dept. QXXE Building 80-E2
              650 Harry Road
              San Jose, California 95120-6099




Part 1. IBM TotalStorage Productivity Center foundation
                 In this part of the book we introduce the IBM TotalStorage Productivity Center:
                     Chapter 1, “IBM TotalStorage Productivity Center overview” on page 3, contains an
                     overview of the components of IBM TotalStorage Productivity Center.
                     Chapter 2, “Key concepts” on page 27, provides information about the communications,
                     protocols, and standards organizations that form the foundation for understanding the IBM
                     TotalStorage Productivity Center.




Chapter 1. IBM TotalStorage Productivity Center overview
                 IBM TotalStorage Productivity Center is software, part of the IBM TotalStorage open software
                 family, designed to provide a single point of control for managing both IBM and non-IBM
                 networked storage devices that implement the Storage Management Initiative Specification
                 (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage
                 Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT),
                 IBM TotalStorage DS4000, IBM TotalStorage DS6000, and IBM TotalStorage DS8000 series.

                 TotalStorage Productivity Center is a solution for customers with storage management
                 requirements, who want to reduce the complexities and costs of storage management,
                 including management of SAN-based storage, while consolidating control within a consistent
                 graphical user interface.

                 This chapter provides an overview of the entire IBM TotalStorage Open Software Family.




1.1 Introduction to IBM TotalStorage Productivity Center
               The IBM TotalStorage Productivity Center consists of software components which enable
               storage administrators to monitor, configure, and manage storage devices and subsystems
               within a SAN environment.

               The TotalStorage Productivity Center is based on the recent standard issued by the Storage
               Networking Industry Association (SNIA). The standard addresses the interoperability of
               storage hardware and software within a SAN.


1.1.1 Standards organizations and standards
               Today, there are at least 10 organizations involved in creating standards for storage, storage
               management, SAN management, and interoperability. Figure 1-1 shows the key
               organizations involved in developing and promoting standards relating to storage, storage
               management, and SAN management, and the relevant standards for which they are
               responsible.




               Figure 1-1 SAN management standards bodies

               Key standards for Storage Management are:
                    Distributed Management Task Force (DMTF) Common Information Model (CIM)
                    standards. These include the CIM Device Model for Storage, which at the time of writing
                    was at Version 2.7.2 of the CIM schema.
                   Storage Networking Industry Association (SNIA) Storage Management Initiative
                   Specification (SMI-S).




1.2 IBM TotalStorage Open Software family
         The IBM TotalStorage Open Software Family is designed to provide a full range of
         capabilities, including storage infrastructure management, Hierarchical Storage Management
         (HSM), archive management, and recovery management.

         The On Demand storage environment is shown in Figure 1-2. The hardware infrastructure is
         a complete range of IBM storage hardware and devices, providing flexibility in choice of
         service quality and cost structure. On top of the hardware infrastructure is the virtualization
         layer: infrastructure software designed to pool storage assets, enabling optimized use of
         storage across the enterprise and the ability to modify the storage infrastructure with
         minimal or no disruption to application services. The next layer is composed of storage
         infrastructure management, to help enterprises understand and proactively manage their
         storage in the on demand world; hierarchical storage management, to help control growth;
         archive management, to manage the cost of storing huge quantities of data; and recovery
         management, to ensure recoverability of data. The top layer is storage orchestration, which
         automates workflows to help eliminate human error.




        Figure 1-2 Enabling customer to move toward On Demand




Previously we discussed the next steps, or entry points, into an On Demand environment. The
                IBM software products which represent these entry points and which comprise the IBM
                TotalStorage Open Software Family are shown in Figure 1-3.




               Figure 1-3 IBM TotalStorage Open Software Family



1.3 IBM TotalStorage Productivity Center
               The IBM TotalStorage Productivity Center is an open storage infrastructure management
               solution designed to help reduce the effort of managing complex storage infrastructures, to
               help improve storage capacity utilization, and to help improve administrative efficiency. It is
               designed to enable an agile storage infrastructure that can respond to On Demand storage
               needs.

               The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help
               simplify the management of complex storage network environments. The IBM TotalStorage
               Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage
               Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli
               Storage Resource Manager), and TotalStorage Productivity Center for Fabric (formerly Tivoli
               SAN Manager).




Taking a closer look at storage infrastructure management (see Figure 1-4), we focus on four
           subject matter experts to empower the storage administrators to effectively do their work.
              Data subject matter expert
               SAN Fabric subject matter expert
              Disk subject matter expert
              Replication subject matter expert




           Figure 1-4 Centralized, automated storage infrastructure management


1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
           The Data subject matter expert has intimate knowledge of how storage is used, for example
           whether the data is used by a file system or a database application. Figure 1-5 on page 8
           shows the role of the Data subject matter expert which is filled by the TotalStorage
           Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).




Figure 1-5 Monitor and Configure the Storage Infrastructure Data area

               Heterogeneous storage infrastructures, driven by growth in file and database data, consume
               increasing amounts of administrative time, as well as actual hardware resources. IT
                managers need ways to make their administrators more efficient and to utilize their storage
                resources more effectively. Tivoli Storage Resource Manager gives storage administrators the
               automated tools they need to manage their storage resources more cost-effectively.

               TotalStorage Productivity Center for Data allows you to identify different classes of data,
               report how much space is being consumed by these different classes, and take appropriate
               actions to keep the data under control.

               Features of the TotalStorage Productivity Center for Data are:
                   Automated identification of the storage resources in an infrastructure and analysis of how
                   effectively those resources are being used.
                   File-system and file-level evaluation uncovers categories of files that, if deleted or
                   archived, can potentially represent significant reductions in the amount of data that must
                   be stored, backed up and managed.
                   Automated control through policies that are customizable with actions that can include
                   centralized alerting, distributed responsibility and fully automated response.
                    Prediction of future growth and future at-risk conditions, based on historical information.

               Through monitoring and reporting, TotalStorage Productivity Center for Data helps the
               storage administrator prevent outages in the storage infrastructure. Armed with timely
               information, the storage administrator can take action to keep storage and data available to
               the application. TotalStorage Productivity Center for Data also helps to make the most
               efficient use of storage budgets, by allowing administrators to use their existing storage more
               efficiently, and more accurately predict future storage growth.
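                The growth-prediction idea described above can be sketched in a few lines. TPC for Data's
                actual forecasting logic is proprietary; this is a minimal sketch assuming simple linear
                growth, and the sample data and the 95% full threshold are hypothetical:

```python
# Sketch: project when a filesystem will hit a capacity threshold by
# fitting a least-squares line to historical usage samples.
# The sample data and the 95% threshold are hypothetical, not TPC defaults.

def fit_line(samples):
    """Least-squares fit of (day, used_gb) pairs; returns (slope, intercept)."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def days_until_full(samples, capacity_gb, threshold=0.95):
    """Days from the last sample until usage crosses threshold * capacity.
    Returns None if usage is flat or shrinking."""
    slope, intercept = fit_line(samples)
    if slope <= 0:
        return None
    last_day = samples[-1][0]
    day_at_threshold = (threshold * capacity_gb - intercept) / slope
    return max(0.0, day_at_threshold - last_day)

# One sample per week: growth of roughly 10 GB/week on a 500 GB filesystem.
history = [(0, 300.0), (7, 310.0), (14, 321.0), (21, 330.0), (28, 341.0)]
print(round(days_until_full(history, 500.0), 1))  # → 92.1
```

                A real monitoring product would of course use far richer models, but even this linear
                projection shows how historical samples turn into an "at risk in N days" alert.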



TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage
           across an enterprise. TotalStorage Productivity Center for Data can look at:
              Storage from a host perspective: Manage all the host-attached storage, capacity and
              consumption attributed to file systems, users, directories, and files
              Storage from an application perspective: Monitor and manage the storage activity inside
              different database entities including instance, tablespace, and table
               Storage utilization, providing chargeback information.

           Architecture
           The TotalStorage Productivity Center for Data server system manages a number of Agents,
           which can be servers with storage attached, NAS systems, or database application servers.
           Information is collected from the Agents and stored in a database repository. The stored
           information can then be displayed from a native GUI client or browser interface anywhere in
           the network. The GUI or browser interface gives access to the other functions of TotalStorage
           Productivity Center for Data, including creating and customizing of a large number of different
           types of reports and setting up alerts.
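            The agent-to-repository flow just described can be modeled with a toy example. Here
            sqlite3 stands in for the DB2 repository, and the table schema, host names, and capacity
            figures are invented for illustration; none of this reflects TPC's actual schema or agent
            protocol:

```python
# Toy model of the TPC for Data architecture: agents push capacity
# snapshots into a central repository, and a reporting client queries it.
# sqlite3 stands in for DB2; the schema and all names are invented.
import sqlite3

def init_repository():
    """Create an in-memory repository with one capacity table."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE capacity (
        host TEXT, filesystem TEXT, total_gb REAL, used_gb REAL)""")
    return db

def agent_report(db, host, snapshots):
    """An agent submits (filesystem, total_gb, used_gb) tuples for its host."""
    db.executemany(
        "INSERT INTO capacity VALUES (?, ?, ?, ?)",
        [(host, fs, total, used) for fs, total, used in snapshots])

def report_fullest(db, limit=3):
    """Reporting client: filesystems ranked by percent used."""
    rows = db.execute(
        """SELECT host, filesystem, ROUND(100.0 * used_gb / total_gb, 1)
           FROM capacity ORDER BY used_gb / total_gb DESC LIMIT ?""",
        (limit,))
    return list(rows)

db = init_repository()
agent_report(db, "host-a", [("/data", 500.0, 460.0), ("/home", 100.0, 20.0)])
agent_report(db, "host-b", [("/db2", 200.0, 150.0)])
print(report_fullest(db))
```

            The separation mirrors the product's design: agents only collect and submit, the repository
            only stores, and all reporting is a query against the central database.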

           With TotalStorage Productivity Center for Data, you can:
              Monitor virtually any host
              Monitor local, SAN-attached and Network Attached Storage from a browser anywhere on
              the network

           For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical
           Introduction, SG24-6886.


1.3.2 Fabric subject matter expert: TotalStorage Productivity Center for Fabric
           The storage infrastructure management for Fabric covers the Storage Area Network (SAN).
            To handle and manage SAN events you need a comprehensive tool: one that provides a
            single point of operation and can perform all SAN management tasks. This role is filled by
            TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is
            a part of the IBM TotalStorage Productivity Center.

            The Fabric subject matter expert is the expert in the SAN. Its role is to:
               Discover fabric information
               Provide the ability to specify fabric policies
               – Which HBAs to use for each host, and for what purpose
               – Objectives for zone configuration (for example, shielding host HBAs from one another,
                 and performance)
               Automatically modify the zone configuration
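            One of the zoning objectives above, shielding host HBAs from one another, corresponds to
            the common practice of single-initiator zoning. The sketch below is a generic illustration of
            that practice, not TPC for Fabric's actual policy engine; the WWPNs and naming
            conventions are hypothetical:

```python
# Sketch of single-initiator zoning: each host HBA gets its own zone
# containing only that HBA plus the storage ports it needs, so host
# HBAs never see one another. All WWPNs and names are hypothetical.

def single_initiator_zones(host_hbas, storage_ports):
    """host_hbas: {host: [hba_wwpn, ...]}; storage_ports: [wwpn, ...].
    Returns {zone_name: [members...]} with exactly one zone per HBA."""
    zones = {}
    for host, hbas in sorted(host_hbas.items()):
        for i, hba in enumerate(hbas):
            zone_name = f"z_{host}_hba{i}"
            # One initiator (the HBA) plus the shared storage targets.
            zones[zone_name] = [hba] + list(storage_ports)
    return zones

hbas = {"web01": ["10:00:00:00:c9:aa:00:01", "10:00:00:00:c9:aa:00:02"],
        "db01": ["10:00:00:00:c9:bb:00:01"]}
ess_ports = ["50:05:07:63:00:c0:00:01"]
for name, members in single_initiator_zones(hbas, ess_ports).items():
    print(name, members)
```

            Because every zone has exactly one initiator, a misbehaving HBA cannot disrupt another
            host's traffic, which is exactly the shielding objective a fabric policy would express.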

            TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs,
            including heterogeneous switch support, and is a central point of control for SAN
            configuration (including zoning). It automates the management of heterogeneous storage
            area networks, resulting in:
               Improved Application Availability
               – Predicting storage network failures before they happen, enabling preventive
                 maintenance
               – Accelerating problem isolation when failures do happen



Optimized Storage Resource Utilization, by reporting on storage network performance
                   Enhanced Storage Personnel Productivity: Tivoli SAN Manager creates a single point of
                   control, administration, and security for the management of heterogeneous storage
                   networks

               Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter
               expert.




               Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area

               TotalStorage Productivity Center for Fabric monitors and manages switches and hubs,
               storage and servers in a Storage Area Network. TotalStorage Productivity Center for Fabric
               can be used for both online monitoring and historical reporting. TotalStorage Productivity
               Center for Fabric:
                  Manages fabric devices (switches) through outband management.
                  Discovers many details about a monitored server and its local storage through an Agent
                  loaded onto a SAN-attached host (Managed Host).
                  Monitors the network and collects events and traps.
                  Launches vendor-provided specific SAN element management applications from the
                  TotalStorage Productivity Center for Fabric Console.
                  Discovers and manages iSCSI devices.
                  Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error
                  Predictor).

               TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN
               storage and management.




10   IBM TotalStorage Productivity Center V2.3: Getting Started
TotalStorage Productivity Center for Fabric components
The major components of the TotalStorage Productivity Center for Fabric include:
   A manager or server, running on a SAN managing server
   Agents, running on one or more managed hosts
   Management console, which is by default on the Manager system, plus optional additional
   remote consoles
   Outband agents - consisting of vendor-supplied MIBs for SNMP

There are two additional components which are not included in the TotalStorage Productivity
Center.
   IBM Tivoli Enterprise™ Console (TEC), which is used to receive TotalStorage Productivity
   Center for Fabric generated events. Once forwarded to TEC, these can then be
   consolidated with events from other applications and acted on according to enterprise
   policy.
   IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data
   gathered by TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data
   Warehouse collects, organizes, and makes data available for the purpose of analysis, in
   order to give management the ability to access and analyze information about its
   business.

The TotalStorage Productivity Center for Fabric functions are distributed across the Manager
and the Agent.

TotalStorage Productivity Center for Fabric Server
   Performs initial discovery of environment:
   – Gathers and correlates data from agents on managed hosts
   – Gathers data from SNMP (outband) agents
   – Graphically displays SAN topology and attributes
   Provides customized monitoring and reporting through NetView®
   Reacts to operational events by changing its display
   (Optionally) forwards events to Tivoli Enterprise Console® or SNMP managers

TotalStorage Productivity Center for Fabric Agent
Gathers information about:
   SANs by querying switches and devices for attribute and topology information
   Host-level storage, such as file systems and LUNs
   Event and other information detected by HBAs
   Forwards topology and event information to the Manager

Discover SAN components and devices
TotalStorage Productivity Center for Fabric uses two methods to discover information about
the SAN - outband discovery, and inband discovery.

Outband discovery is the process of discovering SAN information, including topology and
device data, without using the Fibre Channel data paths. Outband discovery uses SNMP
queries invoked over the IP network. Outband management and discovery is normally used
to manage devices, such as switches and hubs, that support SNMP.




In outband discovery, all communications occur over the IP network:
                  TotalStorage Productivity Center for Fabric requests information over the IP network from
                  a switch using SNMP queries on the device.
                   The device returns the information to TotalStorage Productivity Center for Fabric, also
                   over the IP network.

               Inband discovery is the process of discovering information about the SAN, including
               topology and attribute data, through the Fibre Channel data paths. In inband discovery, both
               the IP and Fibre Channel networks are used:
                  TotalStorage Productivity Center for Fabric requests information (via the IP network) from
                  a Tivoli SAN Manager agent installed on a Managed Host.
                  That agent requests information over the Fibre Channel network from fabric elements and
                  end points in the Fibre Channel network.
                  The agent returns the information to TotalStorage Productivity Center for Fabric over the
                  IP network.
                   TotalStorage Productivity Center for Fabric collects, correlates, and displays information
                   from all devices in the storage network, using both the IP network and the Fibre Channel
                   network. If the Fibre Channel network is unavailable for any reason, monitoring can still
                   continue over the IP network.
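The two discovery paths above can be sketched in code. This is a minimal illustrative model, assuming hypothetical class and method names; it is not the actual TotalStorage Productivity Center for Fabric interface.

```python
# Sketch of outband (SNMP over IP) versus inband (agent over Fibre Channel)
# discovery. All names here are illustrative assumptions.

class Switch:
    """A fabric element reachable over the IP network via SNMP (outband)."""
    def __init__(self, name, port_count):
        self.name = name
        self.port_count = port_count

    def snmp_query(self):
        # Outband: the manager queries the device directly over the IP network.
        return {"device": self.name, "ports": self.port_count, "source": "outband"}

class ManagedHostAgent:
    """An agent on a SAN-attached Managed Host that queries the FC network (inband)."""
    def __init__(self, host, visible_elements):
        self.host = host
        self.visible_elements = visible_elements

    def inband_query(self):
        # Inband: the agent gathers topology over the Fibre Channel data paths
        # and returns the result to the manager over the IP network.
        return {"host": self.host, "elements": self.visible_elements, "source": "inband"}

def discover(switches, agents):
    """Correlate outband and inband results into a single topology view."""
    topology = [sw.snmp_query() for sw in switches]
    topology += [ag.inband_query() for ag in agents]
    return topology
```

Either path alone yields a partial view; correlating both, as the manager does, produces the full topology even when one network is degraded.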

               TotalStorage Productivity Center for Fabric benefits
               TotalStorage Productivity Center for Fabric discovers the SAN infrastructure, and monitors
                the status of all the discovered components. Through Tivoli NetView, the administrator can
                provide reports on faults on components (either individually or in groups, or “smartsets”, of
                components). This helps increase data availability for applications, so the company can
                operate more efficiently and maximize the opportunity to produce revenue.

                TotalStorage Productivity Center for Fabric helps the storage administrator:
                   Prevent faults in the SAN infrastructure through reporting and proactive maintenance
                   Identify and resolve problems in the storage infrastructure quickly when a problem
                   occurs
                   Provide fault isolation of SAN links

               For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli
               Storage Area Network Manager: A Practical Introduction, SG24-6848.


1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
                The Disk subject matter expert’s job is to manage the disk systems. It discovers and
                classifies all disk systems that exist and draws a picture of all discovered disk systems. The
                Disk subject matter expert provides the ability to monitor and configure disk systems, create
                disks, and perform LUN masking. It also provides performance trending and performance
                threshold I/O analysis for both real disks and virtual disks, as well as automated status and
                problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk
                (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component).

               The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on
               page 13. The disk systems monitoring and configuration needs must be covered by a
               comprehensive management tool like the TotalStorage Productivity Center for Disk.




Figure 1-7 Monitor and configure the Storage Infrastructure Disk area

The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and
scheduling performance data collection on the supported devices, of storing the received
performance statistics into database tables for later use, and of analyzing the stored data and
generating reports for various metrics of the monitored devices. In conjunction with data
collection, the TotalStorage Productivity Center for Disk is responsible for managing and
monitoring the performance of the supported storage devices. This includes the ability to
configure performance thresholds for the devices based on performance metrics, the
generation of alerts when these thresholds are exceeded, the collection and maintenance of
historical performance data, and the creation of gauges, or performance reports, for the
various metrics to display the collected historical data to the end user. The TotalStorage
Productivity Center for Disk enables you to perform sophisticated performance analysis for
the supported storage devices.

Functions
TotalStorage Productivity Center for Disk provides the following functions:
   Collect data from devices
    The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise
    Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled
    devices. Each Performance Collector collects performance data from one or more storage
    groups, all of the same device type (for example, ESS or SAN Volume Controller). Each
    Performance Collection has a start time, a stop time, and a sampling frequency. The
    performance sample data is stored in DB2® database tables.
   Configure performance thresholds
   You can use the Productivity Center for Disk to set performance thresholds for each device
   type. Setting thresholds for certain criteria enables Productivity Center for Disk to notify
    you when a certain threshold has been exceeded, so that you can take action before a
    critical event occurs.


You can specify what action should be taken when a threshold-exceeded condition occurs.
                  The action may be to log the occurrence or to trigger an event. The threshold settings can
                  vary by individual device.
                  Monitor performance metrics across storage subsystems from a single console
                  Receive timely alerts to enable event action based on customer policies
                  View performance data from the Productivity Center for Disk database
                  You can view performance data from the Productivity Center for Disk database in both
                  graphical and tabular forms.
                  The Productivity Center for Disk allows a TotalStorage Productivity Center user to access
                  recent performance data in terms of a series of values of one or more metrics, associated
                  with a finite set of components per device. Only recent performance data is available for
                  gauges. Data that has been purged from the database cannot be viewed. You can define
                  one or more gauges by selecting certain gauge properties and saving them for later
                  referral. Each gauge is identified through a user-specified name, and once defined, a
                  gauge can be “started”, which means it is then displayed in a separate window of the
                  TotalStorage Productivity Center GUI. You can have multiple gauges active at the same
                  time.
                  Gauge definition will be accomplished through a wizard, to aid in entering a valid set of
                  gauge properties. Gauges are saved in the Productivity Center for Disk database and
                  retrieved upon request. When you request data pertaining to a defined gauge, the
                  Performance Manager builds a query to the database, retrieves and formats the data and
                  returns it to you. Once started, a gauge is displayed in its own window, and displays all
                   available performance data for the specified initial date/time range. The date/time range
                   can be changed after the initial gauge window is displayed.
                   Focus on storage optimization through identification of the best LUN
                   The Volume Performance Advisor is an automated tool to help the storage administrator
                   pick the best possible placement of a new LUN to be allocated, that is, the best placement
                   from a performance perspective. It uses the historical performance statistics collected
                   from the supported devices to locate unused storage capacity on the SAN that exhibits
                   the best (estimated) performance characteristics. Allocation optimization involves several
                   user-controlled variables, such as the required performance level and the time of
                   day/week/month of prevalent access. This function is fully integrated with the Device
                   Manager function, so that when a new LUN is added, for example, to the ESS, the
                   Performance Manager can seamlessly select the best possible LUN.
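As a sketch of the placement idea behind the Volume Performance Advisor (not the product’s actual algorithm), one can score candidate storage pools by their historical utilization and pick the least-utilized pool that fits the new LUN. The pool structure and fields below are assumptions for illustration.

```python
# Hypothetical best-LUN-placement sketch: choose the candidate pool with enough
# free capacity and the lowest historical utilization (best performance headroom).

def best_placement(candidate_pools, required_gb):
    """Return the pool best suited for a new LUN of required_gb, or None."""
    eligible = [p for p in candidate_pools if p["free_gb"] >= required_gb]
    if not eligible:
        return None
    # Lower average utilization over the collected history suggests better
    # estimated performance for the new allocation.
    return min(eligible, key=lambda p: p["avg_utilization"])
```

A real advisor would weigh further user-controlled variables, such as the required performance level and the time of prevalent access, but the core selection step follows this shape.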

               For detailed information about how to use the functions of the TotalStorage Productivity
               Center for Disk refer to Chapter 11, “Using TotalStorage Productivity Center for Disk” on
               page 375.


1.3.4 Replication subject matter expert: Productivity Center for Replication
               The Replication subject matter expert’s job is to provide a single point of control for all
               replication activities. This role is filled by the TotalStorage Productivity Center for Replication.
               Given a set of source volumes to be replicated, the Productivity Center for Replication will find
               the appropriate targets, perform all the configuration actions required, and ensure the source
               and target volumes relationships are set up. Given a set of source volumes that represent an
               application, the Productivity Center for Replication will group these in a consistency group,
               give that consistency group a name, and allow you to start replication on the application.




Productivity Center for Replication will start up all replication pairs and monitor them to
completion. If any of the replication pairs fail, meaning the application is out of sync, the
Productivity Center for Replication will suspend them until the problem is resolved, resync
them and resume the replication. The Productivity Center for Replication provides complete
management of the replication process.

The requirements addressed by the Replication subject matter expert are shown in Figure 1-8.
Replication in a complex environment needs to be addressed by a comprehensive
management tool like the TotalStorage Productivity Center for Replication.




Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area


Functions
Data replication is the core function required for data protection and disaster recovery. It
provides advanced copy services functions for supported storage subsystems on the SAN.

Replication Manager administers and configures the copy services functions and monitors
the replication actions. Its capabilities consist of the management of two types of copy
services: Continuous Copy (also known as Peer-to-Peer Remote Copy, or PPRC) and
Point-in-Time Copy (also known as FlashCopy®). At this time TotalStorage Productivity
Center for Replication supports the IBM TotalStorage ESS.

Productivity Center for Replication includes support for replica sessions, which ensures that
data on multiple related heterogeneous volumes is kept consistent, provided that the
underlying hardware supports the necessary primitive operations. Productivity Center for
Replication also supports the session concept, such that multiple pairs are handled as a
consistent unit and Freeze-and-Go functions can be performed when errors in mirroring
occur. Productivity Center for Replication is designed to control and monitor the copy services
operations in large-scale customer environments.




Productivity Center for Replication provides a user interface for creating, maintaining, and
               using volume groups and for scheduling copy tasks. The User Interface populates lists of
               volumes using the Device Manager interface. Some of the tasks you can perform with
               Productivity Center for Replication are:
                  Create a replication group. A replication group is a collection of volumes grouped together
                  so that they can be managed concurrently.
                  Set up a Group for replication.
                  Create, save, and name a replication task.
                  Schedule a replication session with the user interface:
                  –   Create Session Wizard.
                  –   Select Source Group.
                  –   Select Copy Type.
                  –   Select Target Pool.
                  –   Save Session.
                  Start a replication session

               A user can also perform these tasks with the Productivity Center for Replication
               command-line interface.
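The suspend-and-resync behavior of a consistency group described above can be sketched as follows. The ReplicationSession class and its state names are hypothetical illustrations, not the Productivity Center for Replication API.

```python
# Sketch of a replication session over a consistency group: all pairs start
# together, one failing pair suspends the group (to keep volumes consistent),
# and resyncing the pair lets the session resume.

class ReplicationSession:
    def __init__(self, name, pairs):
        self.name = name                       # consistency-group/session name
        self.pairs = dict.fromkeys(pairs, "defined")
        self.state = "defined"

    def start(self):
        """Start replication on every source->target pair in the group."""
        for pair in self.pairs:
            self.pairs[pair] = "copying"
        self.state = "running"

    def report_failure(self, pair):
        # An out-of-sync pair means the application copy is inconsistent, so
        # the whole session is suspended until the problem is resolved.
        self.pairs[pair] = "suspended"
        self.state = "suspended"

    def resync(self, pair):
        """Resync a repaired pair; resume the session when all pairs copy again."""
        self.pairs[pair] = "copying"
        if all(s == "copying" for s in self.pairs.values()):
            self.state = "running"
```

The session name lets the administrator operate on the application’s volumes as one unit, which is the point of the consistency-group concept.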

               For more information about the Productivity Center for Replication functions refer to
               Chapter 15, “Using TotalStorage Productivity Center for Replication” on page 827.



1.4 IBM TotalStorage Productivity Center
               All the subject matter experts, for Data, Fabric, Disk, and Replication are components of the
               IBM TotalStorage Productivity Center.

               The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the
               IBM TotalStorage Open Software Family. The IBM TotalStorage Productivity Center is an
               open storage infrastructure management solution designed to help reduce the effort of
               managing complex storage infrastructures, to help improve storage capacity utilization, and to
               help improve administrative efficiency. It is designed to enable an agile storage infrastructure
               that can respond to on demand storage needs.

               The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure
               using existing storage management products — Productivity Center for Data, Productivity
               Center for Fabric, Productivity Center for Disk and Productivity Center for Replication — from
               one physical place.

               The IBM TotalStorage Productivity Center components can be launched from the IBM
               TotalStorage Productivity Center launch pad as shown in Figure 1-9 on page 17.




Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

            The IBM TotalStorage Productivity Center establishes the foundation for IBM’s e-business On
            Demand technology. In an On Demand environment, IT resources must be provided on
            demand, when the resources are needed by an application to support the customer’s
            business process. Of course, resources can be provided or removed today, but the process is
            expensive and time consuming.

            The IBM TotalStorage Productivity Center is the basis for the provisioning of storage
            resources to make the e-business On Demand environment a reality. In the future, more
            automation will be required to handle the huge amount of work in the provisioning area, such
            as the automation the IBM TotalStorage Productivity Center launch pad provides. Automation
            means workflow, and workflow is the key to getting work automated. IBM has a long history
            and investment in building workflow engines and workflows. Today IBM uses the IBM Tivoli
            Intelligent Orchestrator and IBM Tivoli Provisioning Manager to satisfy resource requests in
            the e-business on demand™ environment in the server arena.

           The IBM Tivoli Intelligent Orchestrator and The IBM Tivoli Provisioning Manager provide the
           provisioning in the e-business On Demand environment.


1.4.1 Productivity Center for Disk and Productivity Center for Replication
            The Productivity Center for Disk and Productivity Center for Replication are software that has
            been designed to enable administrators to manage SANs and storage from a single console
            (Figure 1-10 on page 18). This software solution is designed specifically for managing
            networked storage components based on the SMI-S, including:
               IBM TotalStorage SAN Volume Controller
               IBM TotalStorage Enterprise Storage Server (ESS)
               IBM TotalStorage Fibre Array Storage Technology (FAStT)
               IBM TotalStorage DS4000 series
               SMI-S enabled devices




Figure 1-10 Managing multiple devices

               Productivity Center for Disk and Productivity Center for Replication are built on IBM Director,
               a comprehensive server management solution. Using Director with the multiple device
               management solution enables administrators to consolidate the administration of IBM storage
               subsystems and provide advanced storage management functions (including replication and
               performance management) across multiple IBM storage subsystems. It interoperates with
                SAN Management and Enterprise System Resource Manager (ESRM) products from IBM,
                including TotalStorage Productivity Center for Data, and SAN Management products from
                other vendors.

               In a SAN environment, multiple devices work together to create a storage solution. The
               Productivity Center for Disk and Productivity Center for Replication provides integrated
               administration, optimization, and replication features for interacting SAN devices, including
               the SAN Volume Controller and DS4000 Family devices. It provides an integrated view of the
               underlying system so that administrators can drill down through the virtualized layers to easily
               perform complex configuration tasks and more productively manage the SAN infrastructure.

               Because the virtualization layers support advanced replication configurations, the Productivity
               Center for Disk and Productivity Center for Replication products offer features that simplify
               the configuration, monitoring, and control of disaster recovery and data migration solutions. In
               addition, specialized performance data collection, analysis, and optimization features are
               provided. As the SNIA standards mature, the Productivity Center view will be expanded to
               include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on
               page 19 provides an overview of Productivity Center for Disk and Productivity Center for
               Replication.




The figure shows the IBM TotalStorage Productivity Center component stack: the
Performance Manager and Replication Manager sit on top of the Device Manager, which is
built on IBM Director, with WebSphere Application Server and DB2 underneath.
Figure 1-11 Productivity Center overview

The Productivity Center for Disk and Productivity Center for Replication provides support for
configuration, tuning, and replication of the virtualized SAN. As with the individual devices,
the Productivity Center for Disk and Productivity Center for Replication layers are open and
can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for
Disk and Productivity Center for Replication provide the following functions:
   Device Manager - Common function provided when you install the base prerequisite
   products for either Productivity Center for Disk or Productivity Center for Replication
   Performance Manager - provided by Productivity Center for Disk
   Replication Manager - provided by Productivity Center for Replication

Device Manager
The Device Manager is responsible for the discovery of supported devices; collecting asset,
configuration, and availability data from the supported devices; and providing a limited
topology view of the storage usage relationships between those devices.

The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage
devices adheres to the SNIA SMI-S specification standards. Device Manager uses the
Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager
creates managed objects to represent these discovered devices. The discovered managed
objects are displayed as individual icons in the Group Contents pane of the IBM Director
Console as shown in Figure 1-12 on page 20.
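The SLP-based discovery step can be sketched as filtering service advertisements for the WBEM/CIM service type and building one managed object per SMI-S device found. The advertisement dictionaries below are an assumed format for illustration, not the Device Manager’s internal representation; `service:wbem` is the SLP service type commonly registered by CIM Agents.

```python
# Sketch of SLP discovery: keep only advertisements for the CIM/WBEM service
# type and turn each into a managed-object record for the console.

SMIS_SERVICE_TYPE = "service:wbem"   # assumed SLP service type for CIM Agents

def discover_managed_objects(slp_advertisements):
    """Return one managed-object record per SMI-S-enabled advertisement."""
    managed = []
    for ad in slp_advertisements:
        if ad["type"] == SMIS_SERVICE_TYPE:
            managed.append({"url": ad["url"], "status": "discovered"})
    return managed
```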




Figure 1-12 IBM Director Console

               Device Manager provides a subset of configuration functions for the managed devices,
               primarily LUN allocation and assignment. Its function includes certain cross-device
               configuration, as well as the ability to show and traverse inter-device relationships. These
               services communicate with the CIM Agents that are associated with the particular devices to
               perform the required configuration. Devices that are not SMI-S compliant are not supported.
               The Device Manager also interacts and provides some SAN management functionality when
               IBM Tivoli SAN Manager is installed.

               The Device Manager health monitoring keeps you aware of hardware status changes in the
               discovered storage devices. You can drill down to the status of the hardware device, if
               applicable. This enables you to understand which components of a device are malfunctioning
               and causing an error status for the device.

               SAN Management
               When a supported SAN Manager is installed and configured, the Device Manager leverages
               the SAN Manager to provide enhanced function. Along with basic device configuration
               functions such as LUN creation, allocation, assignment, and deletion for single and multiple
               devices, basic SAN management functions such as LUN discovery, allocation, and zoning are
               provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli
               SAN Manager) is currently the supported SAN Manager.

               The set of SAN Manager functions that will be exploited are:
                  The ability to retrieve the SAN topology information, including switches, hosts, ports, and
                  storage devices
                  The ability to retrieve and to modify the zoning configuration on the SAN
                  The ability to register for event notification, to ensure that Productivity Center for Disk is
                  aware when the topology or zoning changes as new devices are discovered by the SAN
                  Manager, and when hosts' LUN configurations change



Performance Manager function
The Performance Manager function provides the raw capabilities of initiating and scheduling
performance data collection on the supported devices, of storing the received performance
statistics into database tables for later use, and of analyzing the stored data and generating
reports for various metrics of the monitored devices. In conjunction with data collection, the
Performance Manager is responsible for managing and monitoring the performance of the
supported storage devices. This includes the ability to configure performance thresholds for
the devices based on performance metrics, the generation of alerts when these thresholds
are exceeded, the collection and maintenance of historical performance data, and the
creation of gauges, or performance reports, for the various metrics to display the collected
historical data to the end user. The Performance Manager enables you to perform
sophisticated performance analysis for the supported storage devices.

Functions
   Collect data from devices
   The Performance Manager collects data from the IBM TotalStorage Enterprise Storage
   Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000
   series, IBM TotalStorage DS6000 and IBM TotalStorage DS8000 series and SMI-S
   enabled devices. The performance collection task collects performance data from one or
   more storage groups, all of the same device type (for example, ESS or SVC). Each
   performance collection task has a start time, a stop time, and a sampling frequency. The
   performance sample data is stored in DB2 database tables.
   Configure performance thresholds
   You can use the Performance Manager to set performance thresholds for each device
   type. Setting thresholds for certain criteria enables Performance Manager to notify you
   when a certain threshold has been exceeded, so that you can take action before a critical
   event occurs.
   You can specify what action should be taken when a threshold-exceeded condition occurs.
   The action may be to log the occurrence or to trigger an event. The threshold settings can
   vary by individual device.

The eligible metrics for threshold checking are fixed for each storage device. If the threshold
metrics are modified by the user, the modifications are accepted immediately and applied to
checking being performed by active performance collection tasks. Examples of threshold
metrics include:
   Disk utilization value
   Average cache hold time
   Percent of sequential I/Os
   I/O rate
   NVS full value
   Virtual disk I/O rate
   Managed disk I/O rate
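Threshold-exceeded checking as described above can be sketched as below. The metric names and limit values are illustrative assumptions, not the IBM-recommended settings the product ships with.

```python
# Sketch of threshold checking: each collected sample is compared against the
# warning and critical settings configured for its metric; exceeding a limit
# produces an alert that could then be logged or trigger an event.

def check_thresholds(samples, thresholds):
    """Return (metric, severity) alerts for any sample over its threshold."""
    alerts = []
    for metric, value in samples.items():
        limits = thresholds.get(metric)
        if limits is None:
            continue  # no threshold configured for this metric
        if value >= limits["critical"]:
            alerts.append((metric, "critical"))
        elif value >= limits["warning"]:
            alerts.append((metric, "warning"))
    return alerts
```

In the product, the thresholds dictionary would vary by device type and by individual device, as the text notes.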

There is a user interface that supports threshold settings, enabling a user to:
   Modify a threshold property for a set of devices of like type.
   Modify a threshold property for a single device.
   – Reset a threshold property to the IBM-recommended value (if defined) for a set of
     devices of like type.
      IBM-recommended critical and warning values will be provided for all thresholds
      known to indicate potential performance problems for IBM storage devices.



– Reset a threshold property to the IBM-recommended value (if defined) for a single
                    device.
                  Show a summary of threshold properties for all of the devices of like type.
                  View performance data from the Performance Manager database.

               Gauges
               The Performance Manager supports a performance-type gauge. The performance-type
               gauge presents sample-level performance data. The frequency at which performance data is
               sampled on a device depends on the sampling frequency that you specify when you define
               the performance collection task. The maximum and minimum values of the sampling
               frequency depend on the device type. The static display presents historical data over time.
               The refreshable display presents near real-time data from a device that is currently collecting
               performance data.

               The Performance Manager enables a Productivity Center for Disk user to access recent
               performance data in terms of a series of values of one or more metrics associated with a finite
               set of components per device. Only recent performance data is available for gauges. Data
               that has been purged from the database cannot be viewed. You can define one or more
               gauges by selecting certain gauge properties and saving them for later referral. Each gauge
               is identified through a user-specified name and, when defined, a gauge can be started, which
               means that it is then displayed in a separate window of the Productivity Center GUI. You can
               have multiple gauges active at the same time.

               Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge
               properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon
               request. When you request data pertaining to a defined gauge, the Performance Manager
               builds a query to the database, retrieves and formats the data, and returns it to you. When
               started, a gauge is displayed in its own window, and it displays all available performance data
               for the specified initial date/time range. The date/time range can be changed after the initial
               gauge window is displayed.

               For performance-type gauges, if a metric selected for display is associated with a threshold
               enabled for checking, the current threshold properties are also displayed in the gauge window
               and are updated each time the gauge data is refreshed.
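The way a gauge window retrieves data for a date/time range can be sketched as follows. The sample data, timestamps, and function name are hypothetical stand-ins; the real gauge queries the Performance Manager database.

```python
from datetime import datetime

# In-memory stand-in for the Performance Manager database: (timestamp,
# value) samples for one metric of one device component, every 5 minutes.
samples = [(datetime(2005, 12, 1, 10, m), 40.0 + m) for m in range(0, 60, 5)]

def gauge_query(samples, start, end):
    """Return the samples inside the requested date/time range, the way
    a gauge window does when it is first opened or later refreshed."""
    return [(ts, v) for ts, v in samples if start <= ts <= end]

window = gauge_query(samples,
                     datetime(2005, 12, 1, 10, 15),
                     datetime(2005, 12, 1, 10, 30))
print(len(window))  # 4 samples: 10:15, 10:20, 10:25, 10:30
```

Changing the date/time range after the window is displayed simply re-runs the query with new bounds; a refreshable gauge re-runs it with the current time as the upper bound.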

               Database services for managing the collected performance data
               The performance data collected from the supported devices is stored in a DB2 database.
                   Database services are provided that enable you to manage the potentially large volumes of data.
                  Database purge function
                  A database purge function deletes older performance data samples and, optionally, the
                  associated exception data. Flexibility is built into the purge function, and it enables you to
                  specify the data to purge, allowing important data to be maintained for trend purposes.
                  You can specify to purge all of the sample data from all types of devices older than a
                  specified number of days. You can specify to purge the data associated with a particular
                  type of device.
                  If threshold checking was enabled at the time of data collection, you can exclude data that
                  exceeded at least one threshold value from being purged.
                  You can specify the number of days that data is to remain in the database before being
                  purged. Sample data and, optionally, exception data older than the specified number of
                  days will be purged.
                  A reorganization function is performed on the database tables after the sample data is
                  deleted from the respective database tables.
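The purge rules above can be sketched as a simple filter. This is an illustration of the retention logic only, with hypothetical sample records; the product operates on DB2 tables, not Python lists.

```python
from datetime import datetime, timedelta

def purge(samples, retention_days, now, keep_exceptions=False):
    """Drop samples older than retention_days; optionally keep samples
    that exceeded a threshold (exception data) for trend analysis."""
    cutoff = now - timedelta(days=retention_days)
    return [s for s in samples
            if s["ts"] >= cutoff or (keep_exceptions and s["exceeded"])]

now = datetime(2005, 12, 31)
samples = [
    {"ts": datetime(2005, 12, 1), "exceeded": True},   # old, exceeded a threshold
    {"ts": datetime(2005, 12, 1), "exceeded": False},  # old, normal
    {"ts": datetime(2005, 12, 29), "exceeded": False}, # recent
]
print(len(purge(samples, retention_days=14, now=now)))                        # 1
print(len(purge(samples, retention_days=14, now=now, keep_exceptions=True)))  # 2
```

After the real purge deletes rows, the product additionally reorganizes the affected database tables, which the sketch does not model.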


22   IBM TotalStorage Productivity Center V2.3: Getting Started
Database information function
              Due to the amount of data collected by the Performance Manager function provided by
              Productivity Center for Disk, the database should be monitored to prevent it from running
              out of space. The database information function returns the database % full. This function
              can be invoked from either the Web user interface or the CLI.

           Volume Performance Advisor
           The advanced performance analysis provided by Productivity Center for Disk is intended to
           address the challenge of allocating more storage in a storage system so that the users of the
           newly allocated storage achieve the best possible performance.

           The Volume Performance Advisor is an automated tool that helps the storage administrator
           pick the best possible placement of a new LUN to be allocated (that is, the best placement
           from a performance perspective). It also uses the historical performance statistics collected
           from the supported devices to locate unused storage capacity on the SAN that exhibits the
           best (estimated) performance characteristics. Allocation optimization involves several
           variables that are user-controlled, such as required performance level and the time of
           day/week/month of prevalent access. This function is fully integrated with the Device Manager
           function so that, for example, when a new LUN is added to the ESS, the Device Manager can
           seamlessly select the best possible LUN.
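The core of the advisor's decision can be illustrated with a toy scoring function: among candidate pools with enough free capacity, pick the one whose historical statistics promise the best performance. The pool names, fields, and selection rule here are hypothetical; the real Volume Performance Advisor weighs many more user-controlled variables, such as required performance level and time of prevalent access.

```python
def best_placement(pools, required_gb):
    """Pick the candidate pool with the lowest historical utilization
    that still has enough free capacity for the new LUN."""
    candidates = [p for p in pools if p["free_gb"] >= required_gb]
    if not candidates:
        raise ValueError("no pool has enough free capacity")
    return min(candidates, key=lambda p: p["avg_util_pct"])["name"]

pools = [
    {"name": "pool_a", "free_gb": 200, "avg_util_pct": 75.0},
    {"name": "pool_b", "free_gb": 500, "avg_util_pct": 30.0},
    {"name": "pool_c", "free_gb": 50,  "avg_util_pct": 5.0},
]
print(best_placement(pools, required_gb=100))  # pool_b
```

Note that the least-utilized pool (pool_c) is rejected because it cannot hold the requested LUN; capacity is a hard constraint, performance a ranking criterion.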

           Replication Manager function
           Data replication is the core function required for data protection and disaster recovery. It
           provides advanced copy services functions for supported storage subsystems on the SAN.

           Productivity Center for Replication administers and configures the copy services functions
           and monitors the replication actions. Its capabilities consist of the management of two types
           of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote
           Copy), and the Point-in-Time Copy (also known as FlashCopy). Currently replication functions
           are provided for the IBM TotalStorage ESS.

           Productivity Center for Replication includes support for replica sessions, which ensures that
           data on multiple related heterogeneous volumes is kept consistent, provided that the
underlying hardware supports the necessary primitive operations. Multiple pairs are handled
as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring
           occur. Productivity Center for Replication is designed to control and monitor the copy services
           operations in large-scale customer environments.

           Productivity Center for Replication is controlled by applying predefined policies to Groups and
           Pools, which are groupings of LUNs that are managed by the Replication Manager. It
           provides the ability to copy a Group to a Pool, in which case it creates valid mappings for
           source and target volumes and optionally presents them to the user for verification that the
           mapping is acceptable. In this case, it manages Pool membership by removing target
           volumes from the pool when they are used, and by returning them to the pool only if the target
           is specified as being discarded when it is deleted.


1.4.2 Event services
           At the heart of any systems management solution is the ability to alert the system
           administrator in the event of a system problem. IBM Director provides a method of alerting
           called Event Action Plans, which enables the definition of event triggers independently from
           actions that might be taken.




An event is an occurrence of a predefined condition relating to a specific managed object that
               identifies a change in a system process or a device. The notification of that change can be
               generated and tracked (for example, notification that a Productivity Center component is not
               available). Productivity Center for Disk and Productivity Center for Replication take full
               advantage of, and build upon, the IBM Director Event Services.

               The IBM Director includes sophisticated event-handling support. Event Action Plans can be
               set up that specify what steps, if any, should be taken when particular events occur in the
               environment. Director Event Management encompasses the following concepts:
                  Events can be generated by any managed object. IBM Director receives such events and
                  calls appropriate internal event handlers that have been registered.
                  Actions are user-configured steps to be taken for a particular event or type of event. There
                  can be zero or more actions associated with a particular action plan. System
                  administrators can create their own actions by customizing particular predefined actions.
                  Event Filters are a set of characteristics or criteria that determine whether an incoming
                  event should be acted on.
                  Event Action Plans are associations of one or more event filters with one or more actions.
                  Event Action Plans become active when you apply them to a system or a group of
                  systems.

               The IBM Director Console includes an extensive set of GUI panels, called the Event Action
               Plan Builder, that enable the user to create action plans and event filters. Event Filters can be
               configured using the Event Action Plan Builder and set up with a variety of criteria, such as
               event types, event severities, day and time of event occurrence, and event categories. This
               allows control over exactly what action plans are invoked for each specific event.
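The filter-plus-action model can be sketched as follows. The event fields, filter criteria, and function names are hypothetical; IBM Director's Event Action Plan Builder expresses the same idea through GUI panels rather than code.

```python
def matches(event, event_filter):
    """An event filter is a set of criteria; an event matches only if it
    satisfies every criterion in the filter."""
    return all(event.get(k) == v for k, v in event_filter.items())

def run_plan(event, plan, log):
    """An Event Action Plan associates filters with actions: when any of
    its filters matches the incoming event, every action is applied."""
    if any(matches(event, f) for f in plan["filters"]):
        for action in plan["actions"]:
            action(event, log)

log = []
plan = {
    "filters": [{"severity": "critical"}],
    "actions": [lambda event, log: log.append("alert: " + event["type"])],
}
run_plan({"type": "storage.offline", "severity": "critical"}, plan, log)
run_plan({"type": "storage.scan", "severity": "info"}, plan, log)
print(log)  # ['alert: storage.offline']
```

Because triggers (filters) are defined independently of actions, the same action list can be reused across many filters, which is exactly the decoupling the Event Action Plan concept provides.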

               Productivity Center provides extensions to the IBM Director event management support.
                  It takes full advantage of the IBM Director built-in support for event logging and viewing.
                  It generates events that will be externalized. Action plans can be created based on filter
                  criteria for these events. The default action plan is to log all events in the event log.
                  It creates additional event families, and event types within those families, that will be listed
                  in the Event Action Plan Builder.
                  Event actions that enable Productivity Center functions to be exploited from within action
                  plans will be provided. An example is the action to indicate the amount of historical data to
                  be kept.



1.5 Taking steps toward an On Demand environment
               So what is an On Demand operating environment? It is not a specific set of hardware and
               software. Rather, it is an environment that supports the needs of the business, allowing it to
               become and remain responsive, variable, focused, and resilient.

               An On Demand operating environment unlocks the value within the IT infrastructure to be
               applied to solving business problems. It is an integrated platform, based on open standards,
               to enable rapid deployment and integration of business applications and processes.
               Combined with an environment that allows true virtualization and automation of the
               infrastructure, it enables delivery of IT capability On Demand.




An On Demand operating environment must be:
   Flexible
   Self-managing
   Scalable
   Economical
   Resilient
   Based on open standards

The move to an On Demand storage environment is an evolving one; it does not happen all at
once. There are several steps that you can take to move toward an On Demand
environment.
   Constant changes to the storage infrastructure (upgrading or changing hardware for
   example) can be addressed by virtualization which provides flexibility by hiding the
   hardware and software from users and applications.
   Empower administrators with automated tools for managing heterogeneous storage
   infrastructures, and eliminate human error.
   Control storage growth with automated identification and movement of low-activity or
   inactive data to a hierarchy of lower-cost storage.
   Manage cost associated with capturing point-in-time copies of important data for
   regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of
   lower-cost storage.
   Ensure recoverability through the automated creation, tracking and vaulting of reliable
   recovery points for all enterprise data.
   The ultimate goal is to eliminate human error by preparing for Infrastructure Orchestration
   software that can be used to automate workflows.

No matter which steps you take toward an On Demand environment, the results will be
improved application availability, optimized storage resource utilization, and enhanced
storage personnel productivity.






    Chapter 2.   Key concepts
                 There are certain industry standards and protocols that are the basis of the IBM TotalStorage
                 Productivity Center. An understanding of these concepts is important for installing and
                 customizing the IBM TotalStorage Productivity Center.

                 In this chapter, we describe the standards on which the IBM TotalStorage Productivity Center
                 is built, as well as the methods of communication used to discover and manage storage
                 devices. We also discuss communication between the various components of the IBM
                 TotalStorage Productivity Center.

                 To help you understand these concepts, we provide diagrams to show the relationship and
                 interaction of the various elements in the IBM TotalStorage Productivity Center environment.




© Copyright IBM Corp. 2005. All rights reserved.                                                            27
2.1 IBM TotalStorage Productivity Center architecture
               This chapter provides an overview of the components and functions that are included in the
               IBM TotalStorage Productivity Center.


2.1.1 Architectural overview diagram
               The architectural overview diagram in Figure 2-1 helps to illustrate the governing ideas and
                building blocks of the product suite that makes up the IBM TotalStorage Productivity Center.
               It provides a logical overview of the main conceptual elements and relationships in the
               architecture, components, connections, users, and external systems.




               Figure 2-1 IBM TotalStorage Productivity Center architecture overview diagram

               IBM TotalStorage Productivity Center and Tivoli Provisioning Manager are presented as
                building blocks in the diagram. Neither product is a single application; each is a complex
                environment in itself.

               The diagram also shows the different methods used to collect information from multiple
               systems to give an administrator the necessary views on the environment, for example:
                  Software clients (agents)
                  Standard interfaces and protocols (for example, Simple Network Management Protocol
                  (SNMP), Common Information Model (CIM) Agent)
                  Proprietary interfaces (for only a few devices)

               In addition to the central data collection, Productivity Center provides a single point of control
               for a storage administrator, even though each manager still comes with its own interface.
               A program called the Launchpad is provided to start the individual applications from a central
               dashboard.



The Tivoli Provisioning Manager relies on Productivity Center to make provisioning possible.


2.1.2 Architectural layers
            The IBM TotalStorage Productivity Center architecture can be broken into three layers as
            shown in Figure 2-2. Layer one represents a high-level overview; there is only one IBM
            TotalStorage Productivity Center instance in the environment. Layers two and three drill down
           into the TotalStorage Productivity Center environment so you can see the managers and the
           prerequisite components.




           Figure 2-2 Architectural layers

           Layer two consists of the individual components that are part of the product suite:
              IBM TotalStorage Productivity Center for Disk
              IBM TotalStorage Productivity Center for Replication
              IBM TotalStorage Productivity Center for Fabric
              IBM TotalStorage Productivity Center for Data

           Throughout this redbook, these products are referred to as managers or components.

           Layer three includes all the prerequisite components, for example IBM DB2, IBM WebSphere,
           IBM Director, IBM Tivoli NetView, and Tivoli Common Agent Services.

           IBM TotalStorage Productivity Center for Fabric can be installed on a full version of
           WebSphere Application Server or on the embedded WebSphere Application Server, which is
           shipped with Productivity Center for Fabric. Installation on a full version of WebSphere
           Application Server is used when other components of TotalStorage Productivity Center are
           installed on the same logical server. IBM TotalStorage Productivity Center for Fabric can
            utilize an existing IBM Tivoli NetView installation or can be installed along with it.

            Note: Each of the manager and prerequisite components can be drilled down even further,
            but in this book we go into this detail only where necessary. The only exception is Tivoli
            Common Agent Services, which is a new underlying service in the Tivoli product family.


           Terms and definitions
           When you look at the diagram in Figure 2-2, you see that each layer has a different name.
           The following sections explain each of these names as well as other terms commonly used in
           this book.


                                                                             Chapter 2. Key concepts   29
Product
               A product is something that is available to be ordered. The individual products that are
               included in IBM TotalStorage Productivity Center are introduced in Chapter 1, “IBM
               TotalStorage Productivity Center overview” on page 3.

               Components
               Products (licensed software packages) and prerequisite software applications are in general
               called components. Some of the components are internal, meaning that, from the installation
               and configuration point of view, they are somewhat transparent. External components have to
               be separately installed. We usually use the term components for the following applications:
                  IBM Director (external, used by Disk and Replication Manager)
                  IBM DB2 (external, used by all managers)
                  IBM WebSphere Application Server (external, used by Disk and Replication Manager,
                  used by Fabric Manager if installed on the same logical server)
                  Embedded WebSphere Application Server (internal, used by Fabric Manager)
                  Tivoli NetView (internal, used by Fabric Manager)
                  Tivoli Common Agent Services (external, used by Data and Fabric Manager)

               Not all of the internal components are always shown in the diagrams and lists in this book.

               The term subcomponent is used to emphasize that a certain component (the subcomponent)
               belongs to or is used by another component. For example, a Resource Manager is a
               subcomponent of the Fabric or Data Manager.

               Managers
               The managers are the central components of the IBM TotalStorage Productivity Center
               environment. They may share some of the prerequisite components. For example, IBM DB2
               and IBM WebSphere are used by different managers.
               In this book, we sometimes use the following terms:
                  Disk Manager for Productivity Center for Disk
                  Replication Manager for Productivity Center for Replication
                  Data Manager for Productivity Center for Data
                  Fabric Manager for Productivity Center for Fabric

                In addition, we use the term Agent Manager for the Tivoli Agent Manager component,
                because the name of the component already includes that term.

               Agents
               The agents are not shown in the diagram in Figure 2-2 on page 29, but they have an
               important role in the IBM TotalStorage Productivity Center environment. There are two types
               of agents: Common Information Model (CIM) Agents and agents that belong to one of the
               managers:
                  CIM Agents: Agents that offer a CIM interface for management applications, for example,
                  for IBM TotalStorage DS8000 and DS6000 series storage systems, IBM TotalStorage
                  Enterprise Storage Server (ESS), SAN (Storage Area Network) Volume Controller, and
                  DS4000 Storage Systems formerly known as FAStT (Fibre Array Storage Technology)
                  Storage Systems
                  Agents that belong to one of the managers:
                  – Data Agents: Agents to collect data for the Data Manager
                  – Fabric Agents: Agents that are used by the Fabric Manager for inband SAN data
                    discovery and collection


In addition to these agents, the Service Location Protocol (SLP) also uses the term agent for
                these components:
                   User Agent
                   Service Agent
                   Directory Agent

                Elements
                We use the generic term element whenever we do not differentiate between components and
                managers.


2.1.3 Relationships between the managers and components
                An IBM TotalStorage Productivity Center environment includes many elements and is
                 complex. This section explains how all the elements work together to form a center for
                storage administration. Figure 2-3 shows the communication between the elements and how
                they relate to each other.

                Each gray box in the diagram represents one machine. The dotted line within a machine
                separates two distinct managers of the IBM TotalStorage Productivity Center.




Figure 2-3 Manager and component relationship diagram

                 All these components can also run on one machine. In this case, all managers and IBM
                 Director share the same DB2 installation, and all managers and IBM Tivoli Agent Manager
                 share the same WebSphere installation.




2.1.4 Collecting data
               Multiple methods are used within the different components to collect data from the devices in
               your environment. In this version of the product, the information is stored in different
               databases (see Table 3-6 on page 62) that are not shared between the individual
               components.

               Productivity Center for Disk and Productivity Center for Replication
               Productivity Center for Disk and Productivity Center for Replication use the Storage
               Management Initiative - Specification (SMI-S) standard (see “Storage Management Initiative -
               Specification” on page 35) to collect information about subsystems. For devices that are not
                CIM ready, this requires the installation of a proxy application (a CIM Agent or CIM Object
                Manager (CIMOM)). Unlike the Data Manager and Fabric Manager, these components do not
                use their own agents.

               IBM TotalStorage Productivity Center for Fabric
               IBM TotalStorage Productivity Center for Fabric uses two methods to collect information:
               inband and outband discovery. You can use either method or you can use both at the same
               time to obtain the most complete picture of your environment. Using just one of the methods
               will give you incomplete information, but topology information will be available in both cases.

               Outband discovery is the process of discovering SAN information, including topology and
               device data, without using the Fibre Channel data paths. Outband discovery uses SNMP
                queries, invoked over the IP network. Outband management and discovery are normally used to
               manage devices such as switches and hubs that support SNMP.

               Inband discovery is the process of discovering information about the SAN, including
               topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the
               following general process:
                  The Agent sends commands through its Host Bus Adapters (HBA) and the Fibre Channel
                  network to gather information about the switches.
                  The switch returns the information through the Fibre Channel network and the HBA to the
                  Agent.
                  The Agent queries the endpoint devices using RNID and SCSI protocols.
                  The Agent returns the information to the Manager over the IP network.
                  The Manager then responds to the new information by updating the database and
                  redrawing the topology map if necessary.
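The inband discovery flow above, seen from the manager's side, can be sketched as follows. The report format, device names, and function name are hypothetical; the point is that the manager updates its database, and redraws the topology map, only when agent reports actually change something.

```python
def inband_discovery(reports, topology):
    """Sketch of the manager side of inband discovery: merge the device
    information returned by the agents over the IP network into the
    topology database, and report whether a redraw is needed."""
    changed = False
    for report in reports:
        if topology.get(report["device"]) != report["attributes"]:
            topology[report["device"]] = report["attributes"]
            changed = True
    return changed

reports = [
    {"device": "switch01", "attributes": {"ports": 16}},
    {"device": "disk01", "attributes": {"protocol": "SCSI"}},
]
topology = {}
print(inband_discovery(reports, topology))  # True: topology map is redrawn
print(inband_discovery(reports, topology))  # False: nothing changed
```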

               Internet SCSI (iSCSI) Discovery is an Internet Protocol (IP)-based storage networking
               standard for linking data storage. It was developed by the Internet Engineering Task Force
               (IETF). iSCSI can be used to transmit data over LANs and WANs.




The discovery paths are shown in parentheses in the diagram in Figure 2-4.




Figure 2-4 Fabric Manager inband and outband discovery paths


IBM TotalStorage Productivity Center for Data
Within the IBM TotalStorage Productivity Center, the Data Manager is used to collect
information about logical drives, file systems, individual files, database usage, and more.
Agents are installed on the application servers and perform a regular scan to report back the
information. To report on a subsystem level, an SMI-S interface is also built in. This information
is correlated with the data that is gathered from the agents to show the LUNs that a host is
using (an agent must be installed on that host).

In contrast to Productivity Center for Disk and Productivity Center for Replication, the SMI-S
interface in Productivity Center for Data is only used to retrieve information, but not to
configure a device.

 Restriction: The SLP User Agent integrated into the Data Manager uses SLP Directory
 Agents and Service Agents to find services in the local subnet. To discover CIM Agents
 from remote networks, they have to be registered with either a Directory Agent or Service
 Agent located in the local subnet, unless routers are configured to also route
 multicast packets.

 You need to add each CIM Agent (that is not discovered) manually to the Data Manager;
 refer to “Configuring the CIM Agents” on page 290.




2.2 Standards used in IBM TotalStorage Productivity Center
               This section presents an overview of the standards that are used within IBM TotalStorage
                Productivity Center by the different components. SLP and CIM are described in detail because
                they are new concepts to many people who work with IBM TotalStorage Productivity Center
               and are important to understand.

                Vendor-specific tools are available to manage devices in the SAN, but these proprietary
               interfaces are not used within IBM TotalStorage Productivity Center. The only exception is the
               application programming interface (API) that Brocade has made available to manage their
               Fibre Channel switches. This API is used within IBM TotalStorage Productivity Center for
               Fabric.


2.2.1 ANSI standards
               Several standards have been published for the inband management of storage devices, for
               example, SCSI Enclosure Services (SES).

               T11 committee
                Since the 1970s, the objective of the ANSI T11 committee has been to define interface
                standards for high-performance and mass storage applications. In that time, the committee
                has completed work on three projects:
                  High-Performance Parallel Interface (HIPPI)
                  Intelligent Peripheral Interface (IPI)
                  Single-Byte Command Code Sets Connection (SBCON)

               Currently the group is working on Fibre Channel (FC) and Storage Network Management
               (SM) standards.

               Fibre Channel Generic Services
                The Fibre Channel Generic Services (FC-GS-3) Directory Service and Management
                Service are used within IBM TotalStorage Productivity Center for SAN
               management. The availability and level of function depends on the implementation by the
               individual vendor. IBM TotalStorage Productivity Center for Fabric uses this standard.


2.2.2 Web-Based Enterprise Management
               Web-Based Enterprise Management (WBEM) is an initiative of the Distributed Management
                Task Force (DMTF) with the objective of enabling the management of complex IT
               environments. It defines a set of management and Internet standard technologies to unify the
               management of complex IT environments.

               The three main conceptual elements of the WBEM initiative are:
                  Common Information Model (CIM)
                  CIM is a formal object-oriented modeling language that is used to describe the
                  management aspects of systems. See also “Common Information Model” on page 47.
                  xmlCIM
                  This is a grammar to describe CIM declarations and messages used by the CIM protocol.
                  Hypertext Transfer Protocol (HTTP)
                  HTTP is used as a way to enable communication between a management application and
                  a device that both use CIM.


The WBEM architecture defines the following elements:
              CIM Client
              The CIM Client is a management application similar to IBM TotalStorage Productivity
              Center that uses CIM to manage devices. A CIM Client can reside anywhere in the
              network, because it uses HTTP to talk to CIM Object Managers and Agents.
              CIM Managed Object
              A CIM Managed Object is a hardware or software component that can be managed by a
              management application using CIM.
              CIM Agent
              The CIM Agent is either embedded in a device or installed on a server, where it
              uses a CIM Provider to translate the device’s proprietary commands into CIM
              calls, and it interfaces with the management application (the CIM Client). A
              CIM Agent is linked to one device.
              CIM Provider
              A CIM Provider is the element that translates CIM calls to the device-specific commands.
              It is like a device driver. A CIM Provider is always closely linked to a CIM Object Manager
              or CIM Agent.
              CIM Object Manager
              A CIM Object Manager (CIMOM) is a part of the CIM Server that links the CIM Client to
              the CIM Provider. It enables a single CIM Client to talk to multiple devices.
              CIM Server
              A CIM Server is the software that runs the CIMOM and the CIM provider for a set of
              devices. This approach is used when the devices do not have an embedded CIM Agent.
              In practice, the term CIMOM is often used when the CIM Server is really meant.


2.2.3 Storage Networking Industry Association
           The Storage Networking Industry Association (SNIA) defines standards that are used within
           IBM TotalStorage Productivity Center. You can find more information on the Web at:
           http://www.snia.org


           Fibre Channel Common HBA API
           The Fibre Channel Common HBA API is used as a standard for inband storage management.
           It acts as a bridge between a SAN management application like Fabric Manager and the Fibre
           Channel Generic Services. IBM TotalStorage Productivity Center for Fabric Agent uses this
           standard.

           Storage Management Initiative - Specification
           SNIA has fully adopted and enhanced CIM for storage management in its SMI-S.
           SMI-S was launched in mid-2002 to create and develop a universal open interface
           for managing storage devices, including storage networks.




                                                                             Chapter 2. Key concepts   35
The idea behind SMI-S is to standardize the management interfaces so that management
               applications can use them to provide cross-device management. This means that a newly
               introduced device can be managed immediately, provided it conforms to the standard. SMI-S
               extends CIM and WBEM with the following features:
                  A single management transport
                  Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this
                  transport in SMI-S.
                  A complete, unified, and rigidly specified object model
                  SMI-S defines profiles and recipes within CIM that enable a management client to
                  reliably use a component vendor’s implementation of the standard, such as the control of
                  LUNs and zones in the context of a SAN.
                  Consistent use of durable names
                  As a storage network configuration evolves and is re-configured, key long-lived resources,
                  such as disk volumes, must be uniquely and consistently identified over time.
                  Rigorously documented client implementation considerations
                  SMI-S provides client developers with vital information for traversing CIM classes within a
                  device or subsystem and between devices and subsystems such that complex storage
                  networking topologies can be successfully mapped and reliably controlled.
                  An automated discovery system
                  SMI-S compliant products, when introduced in a SAN environment, automatically
                  announce their presence and capabilities to other constituents using SLP (see 2.3.1, “SLP
                  architecture” on page 38).
                  Resource locking
                  SMI-S compliant management applications from multiple vendors can exist in the same
                  storage device or SAN and cooperatively share resources through a lock manager.

               The models and protocols in the SMI-S implementation are platform-independent,
               enabling application development for any platform and enabling applications to run on
               different platforms. SNIA also provides interoperability tests that help vendors verify
               that their applications and devices conform to the standard.

               Managers or components that use this standard include:
                  IBM TotalStorage Productivity Center for Disk
                  IBM TotalStorage Productivity Center for Replication
                  IBM TotalStorage Productivity Center for Data


2.2.4 Simple Network Management Protocol
               SNMP is an Internet Engineering Task Force (IETF) protocol for monitoring and
               managing systems and devices in a network. Functions supported by SNMP are the
               request and retrieval of data, the setting or writing of data, and traps that signal the
               occurrence of events. SNMP enables a management application to query information
               from a managed device.

               The managed device has software running that sends and receives the SNMP information.
               This software module is usually called the SNMP agent.




Device management
           An SNMP manager can read information from an SNMP agent to monitor a device. To do
           this, the device is polled at regular intervals. The SNMP manager can also change the
           configuration of a device by writing values to the corresponding variables.

           Managers or components that use these standards include the IBM TotalStorage Productivity
           Center for Fabric.
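As an illustration of the polling mechanism, the following Python sketch hand-encodes a minimal SNMPv1 GetRequest for the standard sysDescr.0 object using BER. It is a teaching-sized encoder (short-form lengths, small integers only), not a full SNMP implementation:

```python
def ber(tag, payload):
    """BER TLV with short-form length (payloads under 128 bytes)."""
    assert len(payload) < 128
    return bytes([tag, len(payload)]) + payload

def ber_int(value):
    """INTEGER; small non-negative values only, which suffices here."""
    return ber(0x02, bytes([value]))

def ber_oid(oid):
    """OBJECT IDENTIFIER: first two arcs packed, rest base-128 encoded."""
    parts = [int(p) for p in oid.split(".")]
    body = bytes([parts[0] * 40 + parts[1]])
    for p in parts[2:]:
        chunk = [p & 0x7F]
        p >>= 7
        while p:
            chunk.insert(0, 0x80 | (p & 0x7F))
            p >>= 7
        body += bytes(chunk)
    return ber(0x06, body)

def snmp_get(community, oid, request_id=1):
    """SNMPv1 GetRequest message: version, community, GetRequest-PDU."""
    varbind = ber(0x30, ber_oid(oid) + ber(0x05, b""))        # OID + NULL
    pdu = ber(0xA0, ber_int(request_id) + ber_int(0)          # error-status
              + ber_int(0) + ber(0x30, varbind))              # error-index, varbinds
    return ber(0x30, ber_int(0) + ber(0x04, community.encode()) + pdu)

packet = snmp_get("public", "1.3.6.1.2.1.1.1.0")  # MIB-II sysDescr.0
```

Sending this packet with a UDP socket to the agent's port 161 and decoding the GetResponse would complete one poll; management products, of course, use full SNMP libraries rather than a hand encoder.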

           Traps
           A device can also be set up to send a notification to the SNMP manager (this is called a trap)
           to asynchronously inform this SNMP manager of a status change. Depending on the existing
           environment and organization, it is likely that your environment already has an SNMP
           management application in place.

           The managers or components that use this standard are:
              IBM TotalStorage Productivity Center for Fabric (sending and receiving of traps)
              IBM TotalStorage Productivity Center for Data can be set up to send traps, but does not
              receive traps
              IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center
              for Replication events can be sent as SNMP traps by utilizing the IBM Director
              infrastructure.

           Management Information Base
            SNMP uses a hierarchically structured Management Information Base (MIB) to define the
            meaning and the type of a particular value. An MIB defines managed objects that describe
            the behavior of the SNMP entity, which can be anything from an IP router to a storage
            subsystem. The information is organized in a tree structure.
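The tree structure means that a name can be recovered from an OID by longest-prefix matching. A minimal sketch, using a hypothetical three-entry slice of the standard MIB-II tree:

```python
# Tiny illustrative slice of the MIB-II tree (OID prefix -> object name)
MIB = {
    "1.3.6.1.2.1.1": "system",
    "1.3.6.1.2.1.1.1": "sysDescr",
    "1.3.6.1.2.1.2": "interfaces",
}

def resolve(oid):
    """Return (name, remaining suffix) for the longest registered prefix."""
    parts = oid.split(".")
    for i in range(len(parts), 0, -1):
        prefix = ".".join(parts[:i])
        if prefix in MIB:
            return MIB[prefix], ".".join(parts[i:])
    return None, oid
```

For example, resolve("1.3.6.1.2.1.1.1.0") yields ("sysDescr", "0"), the object name plus the instance suffix of the scalar.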

            Note: For more information about SNMP, refer to TCP/IP Tutorial and Technical Overview,
            GG24-3376.


           IBM TotalStorage Productivity Center for Data MIB file
           For users planning to use the IBM TotalStorage Productivity Center for Data SNMP trap alert
           notification capabilities, an SNMP MIB is included in the server installation. You can find the
           SNMP MIB in the file tivoli_install_directory/snmp/tivoliSRM.MIB.

           The MIB is provided for use by your SNMP management console software. Most SNMP
           management station products provide a program called an MIB compiler that can be used to
           import MIBs. This allows you to better view Productivity Center for Data generated SNMP
           traps from within your management console software. Refer to your management console
           software documentation for instructions on how to compile or import a third-party MIB.


2.2.5 Fibre Alliance MIB
            The Fibre Alliance has defined an MIB for the management of storage devices and has
            submitted it to the IETF for standardization. The intention of this MIB was to have a
            single MIB that covers most (if not all) of the attributes of storage devices from
            multiple vendors, so that an SNMP manager needs to load only one MIB rather than one
            MIB file for each component. However, this requires that all devices comply with that
            standard MIB, which is not always the case.




Note: This MIB is not part of IBM TotalStorage Productivity Center.

               To learn more about Fibre Alliance and MIB, refer to the following Web sites:
               http://www.fibrealliance.org
               http://www.fibrealliance.org/fb/mib_intro.htm



2.3 Service Location Protocol (SLP) overview
               The SLP is an IETF standard, documented in Request for Comments (RFCs) 2165, 2608,
               2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of
               network services.

               SLP enables the discovery and selection of generic services, which can range in function
               from hardware services such as those for printers or fax machines, to software services such
               as those for file servers, e-mail servers, Web servers, databases, or any other possible
               services that are accessible through an IP network.

               Traditionally, to use a particular service, an end-user or client application needs to supply the
               host name or network IP address of that service. With SLP, however, the user or client no
               longer needs to know individual host names or IP addresses (for the most part). Instead, the
               user or client can search the network for the desired service type and an optional set of
               qualifying attributes.

               For example, a user can specify to search for all available printers that support PostScript,
               based on the given service type (printers), and the given attributes (PostScript). SLP
               searches the user’s network for any matching services and returns the discovered list to the
               user.


2.3.1 SLP architecture
               The SLP architecture includes three major components, a Service Agent (SA), a User Agent
               (UA), and a Directory Agent (DA). The SA and UA are required components in an SLP
               environment, where the SLP DA is optional.

               The SMI-S specification introduces SLP as the method for the management applications (the
               CIM clients) to locate managed objects. In SLP, an SA is used to report to UAs that a service
               that has been registered with the SA is available.

               The following sections describe each of these components.

               Service Agent (SA)
                The SLP SA is a component of the SLP architecture that works on behalf of one or more
                network services to advertise the availability of those services. The SA replies to
                external service requests using IP unicast to provide the requested information about
                the registered services, if it is available.




The SA can run in the same process or in a different process as the service itself. In either
case, the SA supports registration and de-registration requests for the service (as shown in
the right part of Figure 2-5). The service registers itself with the SA during startup, and
removes the registration for itself during shutdown. In addition, every service registration is
associated with a life-span value, which specifies the time that the registration will be active.
In the left part of the diagram, you can see the interaction between a UA and the SA.




Figure 2-5 SLP SA interactions (without SLP DA)

A service is required to reregister itself periodically, before the life-span of its previous
registration expires. This ensures that expired registration entries are not kept. For instance, if
a service becomes inactive without removing the registration for itself, that old registration is
removed automatically when its life span expires. The maximum life span of a registration is
65535 seconds (about 18 hours).
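The life-span bookkeeping described above can be modeled in a few lines. This is a toy registry, not the SLP wire protocol; the class and method names are invented for illustration:

```python
import time

class ServiceRegistry:
    """Toy model of SA/DA registration expiry (illustrative only)."""
    MAX_LIFETIME = 65535  # seconds, the maximum registration life span

    def __init__(self):
        self._entries = {}  # service URL -> absolute expiry time

    def register(self, url, lifetime, now=None):
        """Add or refresh a registration; the life span is capped."""
        lifetime = min(lifetime, self.MAX_LIFETIME)
        now = time.time() if now is None else now
        self._entries[url] = now + lifetime

    def active(self, now=None):
        """Purge expired registrations and return the live service URLs."""
        now = time.time() if now is None else now
        self._entries = {u: t for u, t in self._entries.items() if t > now}
        return sorted(self._entries)

reg = ServiceRegistry()
reg.register("service:wbem://host1", 10, now=0)
```

A real SA would re-register each service shortly before its expiry time, exactly as the text describes.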

User Agent (UA)
The SLP UA is a process working on behalf of the user to establish contact with some
network service. The UA retrieves (or queries for) service information from the Service
Agents or Directory Agents.

The UA is a component of SLP that is closely associated with a client application or a user
who is searching for the location of one or more services in the network. You can use the SLP
UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves
a set of discovered services, including their service Uniform Resource Locator (URL) and any
service attributes. You can then use the service’s URL to connect to the service.

The SLP UA locates the registered services, based on a general description of the services
that the user or client application has specified. This description usually consists of a service
type, and any service attributes, which are matched against the service URLs registered in
the SLP Service Agents.

The SLP UA usually runs in the same process as the client application, although it is not
necessary to do so. The SLP UA processes find requests by sending out multicast messages
to the network and targeting all SLP SAs within the multicast range with a single User
Datagram Protocol (UDP) message. The SLP UA can, therefore, discover these SAs with a
minimum of network overhead. When an SA receives a service request, it compares its own
registered services with the requested service type and any service attributes, if specified,
and returns matches to the UA using a unicast reply message.
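The service request message itself has a simple binary layout. The following Python sketch builds an SLPv2 SrvRqst as described in RFC 2608 (a 14-byte header plus language tag, followed by the previous-responder list, service type, scope list, predicate, and SPI strings); the service:wbem type shown is the one SMI-S CIM Servers register, and the default scope and transaction ID are illustrative:

```python
import struct

def slp_srvrqst(service_type, scopes="DEFAULT", xid=1, lang="en"):
    """SLPv2 Service Request (SrvRqst) with the multicast flag set."""
    def s(text):  # 16-bit length-prefixed string field
        b = text.encode("utf-8")
        return struct.pack("!H", len(b)) + b
    # body: PRList, service type, scope list, predicate, SLP SPI
    body = s("") + s(service_type) + s(scopes) + s("") + s("")
    lang_b = lang.encode("ascii")
    length = 14 + len(lang_b) + len(body)    # header is 14 bytes + lang tag
    header = struct.pack("!BB", 2, 1)        # version 2, function 1 = SrvRqst
    header += length.to_bytes(3, "big")      # 3-byte message length
    header += struct.pack("!H", 0x2000)      # flags: REQUEST MCAST
    header += (0).to_bytes(3, "big")         # next extension offset
    header += struct.pack("!H", xid)         # transaction ID
    header += struct.pack("!H", len(lang_b)) + lang_b
    return header + body

msg = slp_srvrqst("service:wbem")
```

A UA would send msg with a UDP socket to 239.255.255.253 on port 427 and collect the unicast SrvRply messages from the responding SAs.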




The SLP UA follows the multicast convergence algorithm and sends repeated multicast
               messages until no new replies are received. The resulting set of discovered services,
               including their service URL and any service attributes, are returned to the client application or
               user. The client application or user is then responsible for contacting the individual services,
               as needed, using the service’s URL (see Figure 2-6).




               Figure 2-6 SLP UA interactions without SLP DA

               An SLP UA is not required to discover all matching services that exist in the network, but only
               enough of them to provide useful results. This restriction is mainly due to the transmission
               size limits for UDP packets. They can be exceeded when there are many registered services
               or when the registered services have lengthy URLs or a large number of attributes.

               However, in most modern SLP implementations, the UAs can recognize truncated service
               replies and establish TCP connections to retrieve all of the information of the registered
               services. With this type of UA and SA implementation, the only exposure that remains is when
               there are too many SAs within the multicast range. This can cut short the multicast
               convergence mechanism. This exposure can be mitigated by the SLP administrator by setting
               up one or more SLP DAs.

               Directory Agent
               The SLP DA is an optional component of SLP that collects and caches network service
               broadcasts. The DA is primarily used to simplify SLP administration and to improve SLP
               performance.

               You can consider the SLP DA as an intermediate tier in the SLP architecture. It is placed
               between the UAs and the SAs so that both UAs and SAs communicate only with the DA
               instead of with each other. This eliminates a large portion of the multicast request or reply
               traffic in the network. It also protects the SAs from being overwhelmed by too many service
               requests if there are many UAs in the environment.




Figure 2-7 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.




Figure 2-7 SLP User Agent interactions with User Agent and Service Agent

When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When
an SA is first initializing, it performs a DA discovery using a multicast service request. It also
specifies the special, reserved service type service:directory-agent. This process is also
called active DA discovery. It is achieved through the same mechanism as any other
discovery using SLP.

Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting
when it first starts. However, if the SLP UA is statically configured with one or more DA
addresses, it uses those addresses instead. If it is aware of one or more DAs, either through
static configuration or active discovery, it sends unicast service requests to those DAs instead
of multicasting to SAs. The DA replies with unicast service replies, providing the requested
service URLs and attributes. Figure 2-8 shows the interactions of UAs and SAs with DAs,
during active DA discovery.




Figure 2-8 SLP Directory Agent discovery interactions




The SLP DA functions similarly to an SLP SA, receiving registration and deregistration
               requests, and responding to service requests with unicast service replies. There are a couple
               of differences, where DAs provide more functionality than SAs. One area, mentioned
               previously, is that DAs respond to service requests of the service:directory-agent service type
               with a DA advertisement response message, passing back a service URL containing the DA’s
               IP address. This allows SAs and UAs to perform active discovery on DAs.

               One other difference is that when a DA first initializes, it sends a multicast DA advertisement
               message to advertise its services to any existing SAs (and UAs) that may already be active in
               the network. UAs can optionally listen for, and SAs are required to listen for, such
               advertisement messages. This listening process is also sometimes called passive DA
               discovery. When the SA finds a new DA through passive DA discovery, it sends registration
               requests for all its currently registered services to that new DA.

               Figure 2-9 shows the interactions of DAs with SAs and UAs, during passive DA discovery.




               Figure 2-9 Service Location Protocol passive DA discovery

               Why use an SLP DA?
               The primary reason to use DAs is to reduce the amount of multicast traffic involved in
               service discovery. In a large network with many UAs and SAs, the amount of multicast
               traffic involved in service discovery can become so large that network performance
               degrades. When one or more DAs are deployed, UAs send unicast service requests to the
               DAs and SAs register with the DAs using unicast. The only SLP multicast remaining in a
               network with DAs is for active and passive DA discovery.

               SAs register automatically with any DAs they discover within a set of common scopes.
               Consequently, DAs within the UAs’ scopes reduce multicast traffic. Eliminating multicast
               for normal UA requests also eliminates delays and timeouts.

               DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection
               of scopes provides a centralized point for monitoring SLP activity. You can deploy any number
               of DAs for a particular scope or scopes, depending on the need to balance the load.

               In networks without multicasting enabled, you can configure SLP to use broadcast. However,
               broadcast is inefficient, because it requires each host to process the message. Broadcast
               also does not normally propagate across routers. As a result, in a network without multicast,
               DAs can be deployed on multihomed hosts to bridge SLP advertisements between the
               subnets.




When to use DAs
Use DAs in your enterprise when any of the following conditions are true:
   Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by
   snoop.
   UA clients experience long delays or timeouts during multicast service request.
   You want to centralize monitoring of SLP service advertisements for particular scopes on
   one or several hosts.
   Your network does not have multicast enabled and consists of multiple subnets that must
   share services.

SLP communication
SLP uses three methods to send messages across an IP network: unicast, broadcast, and
multicast. Data can be sent to one single destination (unicast) or to multiple destinations
that are listening at the same time (multicast). The difference between multicast and
broadcast is important: a broadcast addresses all stations in a network, whereas multicast
messages are received only by those stations that have joined the corresponding multicast
group.

Unicast
The most common communication method, unicast, requires that a sender of a message
identifies one and only one target of that message. The target IP address is encoded within
the message packet, and is used by the routers along the network path to route the packet to
the proper destination.

If a sender wants to send the same message to multiple recipients, then multiple messages
must be generated and placed in the network, one message per recipient. When there are
many potential recipients for a particular message, then this places an unnecessary strain on
the network resources, since the same data is duplicated many times, where the only
difference is the target IP address encoded within the messages.

Broadcast
In cases where the same message must be sent to many targets, broadcast is a much better
choice than unicast, since it puts much less strain on the network. Broadcasting uses a special
IP address, 255.255.255.255, which indicates that the message packet is intended to be sent
to all nodes in a network. As a result, the sender of a message needs to generate only a
single copy of that message, and can still transmit it to multiple recipients, that is to all
members of the network.

The routers multiplex the message packet, as it is sent along all possible routes in the
network to reach all possible destinations. This puts much less strain on the network
bandwidth, since only a single message stream enters the network, as opposed to one
message stream per recipient. However, it puts much more strain on the individual nodes
(and routers) in the network, since every node receives the message, even though most
nodes are not interested in it. Members of the network that were not the intended
recipients must still receive the unwanted message and discard it. Due to this
inefficiency, in most network configurations, routers are configured not to forward any
broadcast traffic. This means that broadcast messages can reach only nodes on the same
subnet as the sender.

Multicast
The ability of the SLP to automatically discover services that are available in the network,
without a lot of setup or configuration, depends in a large part on the use of IP multicasting. IP
multicasting is a broad subject in itself, and only a brief and simple overview is provided here.

Multicasting can be thought of as a more sophisticated form of broadcast, which aims to
                solve some of the inefficiencies inherent in the broadcast mechanism. With multicasting,
                the sender again has to generate only a single copy of the message, saving network
                bandwidth. However, unlike broadcasting, not every member of the network receives the
                message. Only those members who have explicitly expressed an interest in the particular
                multicast stream receive it.

                Multicasting introduces a concept called a multicast group, where each multicast group is
                associated with a specific IP address. A particular network node (host) can join one or
                more multicast groups, which notifies the associated router or routers that there is an
                interest in receiving multicast streams for those groups. When a sender, which does not
                necessarily have to be a member of the group, sends a message to a particular multicast
                group, that message is routed only to those subnets that contain members of that
                multicast group. This avoids flooding the entire network with the message, as is the case
                for broadcast traffic.

               Multicast addresses
               The Internet Assigned Numbers Authority (IANA), which controls the assignment of IP
               addresses, has assigned the old Class D IP address range to be used for IP multicasting. Of
               this entire range, which extends from 224.0.0.0 to 239.255.255.255, the 224.0.0.* addresses
               are reserved for router management and communication. Some of the 224.0.1.* addresses
               are reserved for particular standardized multicast applications. Each of the remaining
               addresses corresponds to a particular general purpose multicast group.

               The Service Location Protocol uses address 239.255.255.253 for all its multicast traffic. The
               port number for SLP is 427, for both unicast and multicast.
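Both facts can be checked directly from Python's standard library; the membership structure shown at the end is what a receiving host would pass to setsockopt with IP_ADD_MEMBERSHIP to join the SLP group (the INADDR_ANY interface is an illustrative default):

```python
import ipaddress
import socket
import struct

SLP_GROUP = "239.255.255.253"
SLP_PORT = 427

addr = ipaddress.ip_address(SLP_GROUP)
assert addr.is_multicast                              # within 224.0.0.0/4 (old Class D)
assert addr in ipaddress.ip_network("239.0.0.0/8")    # administratively scoped range

# group address + local interface, as consumed by IP_ADD_MEMBERSHIP
mreq = socket.inet_aton(SLP_GROUP) + struct.pack("!I", socket.INADDR_ANY)
```

A UDP socket bound to port 427 with this membership applied would then receive SLP multicast traffic on the local subnet.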

               Configuration recommendations
               Ideally, after IBM TotalStorage Productivity Center is installed, it would discover all storage
               devices that it can physically reach over the IP network. However in most situations, this is
               not the case. This is primarily due to the previously mentioned limitations of multicasting and
               the fact that the majority of routers have multicasting disabled by default. As a result, in most
               cases without any additional configuration, IBM TotalStorage Productivity Center discovers
               only those storage devices that reside in its own subnet, but no more. The following sections
               provide some configuration recommendations to enable TotalStorage Productivity Center to
               discover a larger set of storage devices.

               Router configuration
               The vast majority of the intelligence that allows multicasting to work is implemented in the
               router operating system software. As a result, it is necessary to properly configure the routers
               in the network to allow multicasting to work effectively. Unfortunately, there is a dizzying array
               of protocols and algorithms which can be used to configure particular routers to enable
               multicasting. These are the most common ones:
                  Internet Group Management Protocol (IGMP) is used to register individual hosts in
                  particular multicast groups, and to query group membership on particular subnets.
                  Distance Vector Multicast Routing Protocol (DVMRP) is a set of routing algorithms that
                  use a technique called Reverse Path Forwarding to decide how multicast packets are to
                  be routed in the network.
                  Protocol-Independent Multicast (PIM) comes in two varieties: dense mode (PIM-DM) and
                  sparse mode (PIM-SM). They are optimized to networks where either a large percentage
                  of nodes require multicast traffic (dense), or a small percentage require the traffic
                  (sparse).




Multicast Open Shortest Path First (MOSPF) is an extension of OSPF, a “link-state”
   unicast routing protocol that attempts to find the shortest path between any two networks
   or subnets to provide the most optimal routing of packets.

The routers of interest are all those which are associated with subnets that contain one or
more storage devices which are to be discovered and managed by TotalStorage Productivity
Center. You can configure the routers in the network to enable multicasting in general, or at
least to allow multicasting for the SLP multicast address, 239.255.255.253, and port, 427.
This is the most generic solution and permits discovery to work the way that it was intended
by the designers of SLP.

To properly configure your routers for multicasting, refer to your router manufacturer’s
reference and configuration documentation. Although older hardware may not support
multicasting, all modern routers do. However, in most cases, multicast support is disabled by
default, which means that multicast traffic is sent only among the nodes of a subnet but is not
forwarded to other subnets. For SLP, this means that service discovery is limited to only those
agents which reside in the same subnet.

Firewall configuration
In the case where one or more firewalls are used between TotalStorage Productivity Center
and the storage devices that are to be managed, the firewalls need to be configured to pass
traffic in both directions, as SLP communication is two way. This means that when
TotalStorage Productivity Center, for example, queries an SLP DA that is behind a firewall for
the registered services, the response will not use an already opened TCP/IP session but will
establish another connection in the direction from the SLP DA to the TotalStorage
Productivity Center. For this reason, port 427 should be opened in both directions, otherwise
the response will not be received and TotalStorage Productivity Center will not recognize
services offered by this SLP DA.

SLP DA configuration
If router configuration is not feasible, another technique is to use SLP DAs to circumvent the
multicast limitations. Since with statically configured DAs, all service requests are unicast
instead of multicast by the UA, it is possible to simply configure one DA for each subnet that
contains storage devices which are to be discovered by TotalStorage Productivity Center.

One DA is sufficient for each such subnet, although more can be configured without harm,
perhaps for reasons of fault tolerance. Each of these DAs can discover all services within its
own subnet, but no other services outside its own subnet. To allow Productivity Center to
discover all of the devices, you must statically configure it with the addresses of each of these
DAs. You accomplish this using the IBM Director GUI’s Discovery Preference panel. From the
MDM SLP Configuration tab, you can enter a list of DA addresses.

As described previously, Productivity Center unicasts service requests to each of these
statically configured DAs, but also multicasts service requests on the local subnet on which
Productivity Center is installed. Figure 2-10 on page 46 displays a sample environment where
DAs have been used to bridge the multicast gap between subnets in this manner.

 Note: At this time, you cannot set up IBM TotalStorage Productivity Center for Data to use
 remote DAs in the way that Productivity Center for Disk and Productivity Center for
 Replication can. You must either define all remote CIM Agents by creating a new entry in
 the CIMOM Login panel or register the remote services in a DA that resides in the local
 subnet. Refer to “Configuring the CIM Agents” on page 290 for detailed information.




                                                                    Chapter 2. Key concepts   45
Figure 2-10 Recommended SLP configuration

               You can easily configure an SLP DA by changing the configuration of the SLP SA included as
               part of an existing CIM Agent installation. This causes the program that normally runs as an
               SLP SA to run as an SLP DA instead. The procedure to perform this configuration is
               explained in 6.2, “SLP DA definition” on page 248. Note that the change from SA to DA does
               not affect the CIMOM service of the subject CIM Agent, which continues to function as
               normal, sending registration and de-registration commands to the DA directly.

               SLP configuration with services outside local subnet

               SLP DAs and SAs can also be configured to cache CIM service information from non-local
               subnets. CIM Agents and CIMOMs usually provide a local SLP SA function. When you need
               to discover CIM services outside the local subnet and the network configuration does not
               permit the use of an SLP DA in each remote subnet (for example, firewall rules do not allow
               two-way communication on port 427), you can register the remote services on the SLP DA in
               the local subnet.

               This configuration can be done by using slptool, which is part of SLP installation packages.
               Such registration is not persistent across system restarts. To achieve persistent registration of
               services outside of the local subnet, these services need to be defined in the registration file
               used by SLP DA at startup.
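As an illustration, a persistent registration of a remote CIM service might look like the following slp.reg entry. The service URL and description below are hypothetical; the syntax follows the OpenSLP registration-file convention of a service-url,language,lifetime line followed by attribute lines:

```
# Register a WBEM service from a non-local subnet at SLP DA startup.
# Format: service-url,language-tag,lifetime (65535 = permanent)
service:wbem:https://9.42.1.115:5989,en,65535
description=CIM Agent for a remote storage subsystem
```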

               Refer to 5.7.3, “Setting up the Service Location Protocol Directory Agent” on page 221 for
               information on setting up the slp.reg file.




46   IBM TotalStorage Productivity Center V2.3: Getting Started
2.3.2 Common Information Model
          The CIM Agent provides a means by which a device can be managed by common building
          blocks rather than proprietary software. If a device is CIM-compliant, software that is also
          CIM-compliant can manage the device. Vendor applications can benefit from adopting the
          common information model because they can manage CIM-compliant devices in a common
          way, rather than using device-specific programming interfaces. Using CIM, you can perform
          tasks in a consistent manner across devices and vendors.

          CIM uses schemas as a kind of class library to define objects and methods. The schemas
          can be categorized into three types:
             Core schema: Defines classes and relationships of objects
             Common schema: Defines common components of systems
             Extension schema: Entry point for vendors to implement their own schema

          The CIM/WBEM architecture defines the following elements:
             Agent code or CIM Agent
             An open-systems standard that interprets CIM requests and responses as they transfer
             between the client application and the device. The Agent is embedded into a device, which
             can be hardware or software.
             CIM Object Manager
             The common conceptual framework for data management that receives, validates, and
             authenticates the CIM requests from the client application. It then directs the requests to
             the appropriate component or a device provider such as a CIM Agent.
             Client application or CIM Client
             A storage management program, such as TotalStorage Productivity Center, that initiates
             CIM requests to the CIM Agent for the device. A CIM Client can reside anywhere in the
             network, because it uses HTTP to talk to CIM Object Managers and Agents.
             Device or CIM Managed Object
              A Managed Object is a hardware or software component that can be managed by a
              management application by using CIM, for example, an IBM SAN Volume Controller.
             Device provider
             A device-specific handler that serves as a plug-in for the CIMOM. That is, the CIMOM
             uses the handler to interface with the device.

            Note: The terms CIM Agent and CIMOM are often used interchangeably. At this time, few
            devices come with an integrated CIM Agent. Most devices need an external CIMOM to
            enable management applications (CIM Clients) to talk to the device.

           For ease of installation, IBM provides an Integrated Configuration Agent Technology
           (ICAT), which is a bundle that includes the CIMOM, the device provider, and an SLP SA.


          Integrating legacy devices into the CIM model
          Since these standards are still evolving, we cannot expect that all devices will support the
          native CIM interface. Because of this, the SMI-S is introducing CIM Agents and CIM Object
          Managers. The agents and object managers bridge proprietary device management to device
                management models and protocols used by SMI-S. An agent serves a single device; an
                object manager serves a set of devices. This type of operation is also called the proxy model
                and is shown in Figure 2-11 on page 48.



The CIM Agent or CIMOM translates a proprietary management interface to the CIM
               interface. The CIM Agent for the IBM TotalStorage ESS includes a CIMOM inside it.

               In the future, more and more devices will be native CIM compliant, and will therefore have a
               built-in Agent as shown in the “Embedded Model” in Figure 2-11. When widely adopted,
               SMI-S will streamline the way that the entire storage industry deals with management.
               Management application developers will no longer have to integrate incompatible
               feature-poor interfaces into their products. Component developers will no longer have to push
               their unique interface functionality to application developers. Instead, both will be better able
               to concentrate on developing features and functions that have value to end-users. Ultimately,
               faced with reduced costs for management, end-users will be able to adopt storage-networking
               technology faster and build larger, more powerful networks.



                [Figure: a CIM Client (management application) uses CIM-XML (CIM operations over
                HTTP/TCP/IP) to reach an Agent in front of a device or subsystem (proxy model), an Agent
                embedded in the device (embedded model), or an Object Manager whose providers front
                multiple devices (proxy model).]
               Figure 2-11 CIM Agent and Object Manager overview


               CIM Agent implementation
               When a CIM Agent implementation is available for a supported device, the device may be
               accessed and configured by management applications using industry-standard
               XML-over-HTTP transactions. This interface enables IBM TotalStorage Productivity Center
               for Data, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center
               for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more
               effectively.

               By implementing a standard interface over all devices, an open environment is created in
               which tools from a variety of vendors can work together. This reduces the cost of developing
               integrated management applications, installing and configuring management applications,
               and managing the SAN infrastructure. Figure 2-12 on page 49 shows an overview of the CIM
               Agent.




Figure 2-12 CIM Agent overview

          The CIM Agent includes a CIMOM, which adapts various devices using a plug-in called a
          provider. The CIM Agent can work as a proxy or can be embedded in storage devices. When
          the CIM Agent is installed as a proxy, the IBM CIM Agent can be installed on the same server
          that supports the device user interface.

          CIM Object Manager
          The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used
          to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager.
          External applications communicate with CIM through HTTP to exchange XML messages that
          are used to configure and manage the device.

          In a proxy configuration, the CIMOM runs outside of the device and can manage multiple
          devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM
          to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt
          the CIMOM to work with different devices and subsystems. In this way, a single CIMOM
          installation can be used to access more than one device type and more than one device of
          each type on a subsystem.

          The CIMOM acts as a catcher for requests that are sent from storage management
          applications. The interactions between the catcher and sender use the language and models
          defined by the SMI-S standard. This enables storage management applications, regardless of
          vendor, to query status and perform command and control using XML-based CIM
          interactions.
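To make the XML exchange concrete, the following Python sketch builds the body and HTTP headers of a CIM-XML EnumerateInstances request in the style defined by the CIM Operations over HTTP specification (DSP0200). It is a simplified illustration rather than a complete CIM Client, and the class name used in the usage example is only an assumption:

```python
def build_cimxml_enumerate(classname, namespace="root/cimv2"):
    """Build a minimal CIM-XML EnumerateInstances request body."""
    # The namespace path is expressed as a chain of NAMESPACE elements.
    ns = "".join(f'<NAMESPACE NAME="{p}"/>' for p in namespace.split("/"))
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<CIM CIMVERSION="2.0" DTDVERSION="2.0">'
        '<MESSAGE ID="1001" PROTOCOLVERSION="1.0"><SIMPLEREQ>'
        '<IMETHODCALL NAME="EnumerateInstances">'
        f'<LOCALNAMESPACEPATH>{ns}</LOCALNAMESPACEPATH>'
        '<IPARAMVALUE NAME="ClassName">'
        f'<CLASSNAME NAME="{classname}"/></IPARAMVALUE>'
        '</IMETHODCALL></SIMPLEREQ></MESSAGE></CIM>'
    )

def cimxml_headers(body, method="EnumerateInstances", obj="root/cimv2"):
    """HTTP headers that accompany a CIM-XML operation request."""
    return {
        "Content-Type": 'application/xml; charset="utf-8"',
        "Content-Length": str(len(body)),
        "CIMOperation": "MethodCall",   # DSP0200 extension headers
        "CIMMethod": method,
        "CIMObject": obj,
    }
```

A CIM Client would POST this body, with these headers, to the CIMOM's HTTP or HTTPS listener.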



2.4 Component interaction
          This section provides an overview of the interactions between the different components by
          using standardized management methods and protocols.


2.4.1 CIMOM discovery with SLP
          The SMI-S specification introduces SLP as the method for the management applications (the
          CIM clients) to locate managed objects. SLP is explained in more detail in 2.3, “Service
          Location Protocol (SLP) overview” on page 38. Figure 2-13 on page 50 shows the interaction
          between CIMOMs and SLP components.
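For example, a CIMOM advertises itself through SLP with a service URL such as service:wbem:https://9.42.1.115:5989 (the address here is hypothetical). A small Python sketch that extracts the connection details a CIM Client needs, assuming the conventional WBEM defaults of port 5988 for HTTP and 5989 for HTTPS:

```python
from urllib.parse import urlsplit

def parse_wbem_service_url(service_url):
    """Split an SLP service URL of the form
    service:wbem:https://host:port into its connection details."""
    prefix = "service:wbem:"
    if not service_url.startswith(prefix):
        raise ValueError("not a WBEM service URL: " + service_url)
    parts = urlsplit(service_url[len(prefix):])
    return {
        "scheme": parts.scheme,  # http or https
        "host": parts.hostname,
        # Fall back to the conventional WBEM default ports.
        "port": parts.port or (5989 if parts.scheme == "https" else 5988),
    }
```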




                [Figure: the WBEM/CIM picture of Figure 2-11 extended with SLP components: the
                management application (CIM Client) includes a UA, each Agent and Object Manager
                includes an SA, and Directory Managers (DAs) and a Lock Manager sit alongside them.
                The SAs and the UA communicate over SLP on TCP/IP, while CIM operations continue to
                flow as CIM-XML over HTTP.]
               Figure 2-13 SMI-S extensions to WBEM/CIM


2.4.2 How CIM Agent works
               The CIM Agent typically works as explained in the following sequence and as shown in
               Figure 2-14 on page 51:
               1. The client application locates the CIMOM by calling an SLP directory service.
               2. The CIMOM is invoked.
               3. The CIMOM registers itself to the SLP and supplies its location, IP address, port number,
                  and the type of service it provides.
               4. With this information, the client application starts to directly communicate with the CIMOM.
               5. The client application sends CIM requests to the CIMOM. As requests arrive, the CIMOM
                  validates and authenticates each request.
               6. The CIMOM directs the requests to the appropriate functional component of the CIMOM
                  or to a device provider.
               7. The provider makes calls to a device-unique programming interface on behalf of the
                  CIMOM to satisfy client application requests.
                8. – 10. The provider passes the results back to the CIMOM, which returns the response to
                   the client application.
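The numbered flow above can be sketched with toy stand-ins for the SLP directory, the CIMOM, and a device provider. All class names and the service URL below are illustrative, not part of the product:

```python
class SlpDirectory:
    """Toy stand-in for the SLP directory service (steps 1 and 3)."""
    def __init__(self):
        self.services = {}

    def register(self, service_type, url):
        # Step 3: the CIMOM registers its location and service type.
        self.services.setdefault(service_type, []).append(url)

    def lookup(self, service_type):
        # Step 1: the client application locates the CIMOM.
        return self.services.get(service_type, [])

class Cimom:
    """Toy CIMOM that validates a request and hands it to a provider."""
    def __init__(self, url, provider):
        self.url, self.provider = url, provider

    def handle(self, request):
        # Steps 5-7: validate the request, then direct it to the provider.
        if "ClassName" not in request:
            raise ValueError("invalid CIM request")
        return self.provider(request["ClassName"])

# Wire the pieces together the way the numbered steps describe.
directory = SlpDirectory()
cimom = Cimom("service:wbem:https://cimom.example:5989",
              provider=lambda cls: [cls + "-instance-1"])
directory.register("service:wbem", cimom.url)      # CIMOM registration
urls = directory.lookup("service:wbem")            # client discovery
result = cimom.handle({"ClassName": "IBM_StorageVolume"})
```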




Figure 2-14 CIM Agent work flow



2.5 Tivoli Common Agent Services
        Tivoli Common Agent Services is a new concept whose goal is to provide a set of
        agent-management functions common to all Tivoli products. At the time of writing, IBM
        TotalStorage Productivity Center for Fabric and IBM TotalStorage Productivity Center for
        Data are the first applications to use this concept. See Figure 2-15 on page 52 for an
        overview of the three elements in the Tivoli Common Agent Services infrastructure.

        In each of the planning and installation guides of the Productivity Center for Fabric and
        Productivity Center for Data, there is a chapter that provides information about the benefits,
        system requirements and sizing, security considerations, and the installation procedures.

        The Agent Manager is the central network element that, together with the distributed
        Common Agents, builds an infrastructure that other applications use to deploy and manage
        an agent environment. Each application integrates into this environment through a Resource
        Manager that is built into the application server (Productivity Center for Data or Productivity
        Center for Fabric).

         Note: You can have multiple Resource Managers of the same type using a single Agent
         Manager. This may be necessary to scale the environment when, for example, one Data
         Manager can no longer handle the load. Each agent is then managed by only one of the
         Data Managers.


Figure 2-15 Tivoli Common Agent Services

               The Common Agent provides the platform for the application specific agents. Depending on
               the tasks for which a subagent is used, the Common Agent is installed on the customers’
               application servers, desktop PCs, or notebooks.

                Note: In different documentation, Readme files, directory and file names, you also see the
                terms Common Endpoint, Endpoint, or simply EP. This always refers to the Common
                Agent, which is part of the Tivoli Common Agent Services.

               The Common Agent communicates with the application-specific subagent, the Agent
               Manager, and the Resource Manager, but the actual system-level functions are invoked by
               the subagent. The information that the subagent collects is sent directly to the Resource
               Manager using the application’s native protocol. This makes it possible to run down-level
               agents in the same environment as the new agents that are shipped with the IBM
               TotalStorage Productivity Center.

               Certificates are used to validate whether a requester is allowed to establish communication.
               Demo keys are supplied so that you can quickly set up and configure a small environment;
               however, because every installation CD uses the same certificates, this is not secure. If you
               want to use Tivoli Common Agent Services in a production environment, we recommend that
               you use your own keys, which can be created during the Tivoli Agent Manager installation.

               One of the most important certificates is stored in the agentTrust.jks file. The certificate can
               also be created during the installation of Tivoli Agent Manager. If you do not use the demo
               certificates, you need to have this file available during the installation of the Common Agent
               and the Resource Manager.

               This file is locked with a password (the agent registration password) to secure access to the
               certificates. You can use the ikeyman utility in the java\jre subdirectory to verify your
               password.

2.5.1 Tivoli Agent Manager
           The Tivoli Agent Manager requires a database to store information in what is called the
           registry. Currently there are three options for installing the database: IBM Cloudscape™
           (provided on the installation CD), a local DB2 database, or a remote DB2 database. Because
           the registry does not contain much information, the Cloudscape database is usually
           sufficient. In the setup described later in this book, we chose a local DB2 database because
           DB2 was already required by another component installed on the same machine.

           WebSphere Application Server is the second prerequisite for the Tivoli Agent Manager. This
           is installed if you use the Productivity Center Suite Installer or if you choose to use the Tivoli
           Agent Manager installer. We recommend that you do not install WebSphere Application
           Server manually.

           The Agent Manager uses three dedicated ports (9511-9513). Port 9511 is the most important
           because you must enter it during the installation of a Resource Manager or Common Agent if
           you choose to change the defaults.

           When the WebSphere Application Server is being installed, make sure that Microsoft
           Internet Information Server (IIS) is not running or, better yet, is not installed at all. Port 80 is
           used by the Tivoli Agent Manager for the recovery of agents that can no longer communicate
           with the manager because of lost passwords or certificates. This Agent Recovery Service is
           located by a DNS entry with the unqualified host name TivoliAgentRecovery.

           Periodically check the Agent Manager log for agents that are unable to communicate with the
           Agent Manager server. The recovery log is the %WAS_INSTALL_ROOT%\AgentManager\
           logs\SystemOut.log file. Use the information in the log file to determine why an agent could
           not register, and then take corrective action.

           During the installation, you also have to specify the agent registration password and the
           Agent Registration Context Root. The password is stored in the AgentManager.properties file
           on the Tivoli Agent Manager. This password is also used to lock the agentTrust.jks certificate
           file.

            Important: A detailed description about how to change the password is available in the
            corresponding Resource Manager Planning and Installation Guide. Since this involves
            redistributing the agentTrust.jks files to all Common Agents, we encourage you to use your
            own certificates from the beginning.

           To control the access from the Resource Manager to the Common Agent, certificates are
           used to make sure that only an authorized Resource Manager can install and run code on a
           computer system. This certificate is stored in the agentTrust.jks and locked with the agent
           registration password.


2.5.2 Common Agent
           As mentioned earlier, the Common Agent is used as a platform for application specific
           agents. These agents sometimes are called subagents.

           The subagents can be installed using two different methods:
              Using an application specific installer
              From a central location once the Common Agent is installed




When you install the software, the agent has to register with the Tivoli Agent Manager. During
               this procedure, you need to specify the registration port on the manager (by default 9511).
               Furthermore, you need to specify an agent registration password. This registration is
               performed by the Common Agent, which is installed automatically if not already installed.

                If the subagent is deployed from a central location, port 9510 is used by default by the
                installer (running on the central machine) to communicate with the Common Agent and to
                download and install the code. When this method is used, no password or certificate is
                required, because these were already provided during the Common Agent installation on the
                machine.

               If you choose to use your own certificate during the Tivoli Agent Manager installation, you
               need to supply it for the Common Agent installation.




Part 2. Installing the IBM TotalStorage Productivity Center base product suite
                 In this part of the book we provide information to help you successfully install the prerequisite
                 products that are required before you can install the IBM TotalStorage Productivity Center
                 product suite. This includes installing:
                     DB2
                     IBM Director
                     WebSphere Application Server
                     Tivoli Agent Manager
                     IBM TotalStorage Productivity Center for Disk
                     IBM TotalStorage Productivity Center for Replication




© Copyright IBM Corp. 2005. All rights reserved.                                                               55
Chapter 3. Installation planning and considerations

                  IBM TotalStorage Productivity Center is made up of several products that can be installed
                  individually, as a complete suite, or in any combination. Installing multiple products creates a
                  synergy that allows the products to interact with each other to provide a more complete
                  solution and help you meet your business storage management objectives.

                 This chapter contains information that you will need before beginning the installation. It also
                 discusses the supported environments and pre-installation tasks.




3.1 Configuration
               You can install the storage management components of IBM TotalStorage Productivity
               Center on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite,
               when all four manager components are installed on the same system, the only common
               platforms for the managers are:
                  Windows 2000 Server with Service Pack 4
                  Windows 2000 Advanced Server
                  Windows 2003 Enterprise Edition

                Note: Refer to the following Web site for the updated support summaries, including specific
                software, hardware, and firmware levels supported:
                 http://www.storage.ibm.com/software/index.html


                If you are using the storage provisioning workflows, you must install IBM TotalStorage
                Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, and IBM
                TotalStorage Productivity Center for Fabric on the same machine.

               Because of processing requirements, we recommend that you install IBM Tivoli Provisioning
               Manager on a separate Windows machine.



3.2 Installation prerequisites
               This section lists the minimum prerequisites for installing IBM TotalStorage Productivity
               Center.

               Hardware
               The following hardware is required:
                  Dual Pentium® 4 or Intel® Xeon 2.4 GHz or faster processors
                  4 GB of DRAM
                  Network connectivity
                  Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric
                  (optional)
                  5 GB available disk space.

               Database
               You must comply with the following database requirements:
                  The installation of DB2 Version 8.2 is part of the Prerequisite Software Installer and is
                  required by all the managers.
                  Other databases that are supported are:
                  – For IBM TotalStorage Productivity Center for Fabric:
                      •   IBM Cloudscape 5.1.60 (provided on the CD)
                  – For IBM TotalStorage Productivity Center for Data:
                      •   Microsoft SQL Server Version 7.0, 2000
                      •   Oracle 8i, 9i, 9i V2
                      •   Sybase SQL Server (Adaptive Server Enterprise) Version 12.5 or higher
                      •   IBM Cloudscape 5.1.60 (provided on the CD)


3.2.1 TCP/IP ports used by TotalStorage Productivity Center
           This section provides an overview of the TCP/IP ports used by IBM TotalStorage Productivity
           Center.

           TCP/IP ports used by Disk and Replication Manager
           The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center
           for Replication Manager installation program preconfigures the TCP/IP ports used by
           WebSphere. Table 3-1 lists the values that correspond to the WebSphere ports.

           Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication Base
            Port value                                          WebSphere ports

            427                                                 SLP port

            2809                                                Bootstrap port

            9080                                                HTTP Transport port

            9443                                                HTTPS Transport port

            9090                                                Administrative Console port

            9043                                                Administrative Console Secure Server port

            5559                                                JMS Server Direct Address port

             5557                                                JMS Server Security port

            5558                                                JMS Server Queued Address port

            8980                                                SOAP Connector Address port

            7873                                                DRS Client Address port


           TCP/IP ports used by Agent Manager
           The Agent Manager uses the TCP/IP ports listed in Table 3-2.

           Table 3-2 TCP/IP ports for Agent Manager
            Port value        Usage

            9511              Registering agents and resource managers

            9512              Providing configuration updates
                              Renewing and revoking certificates
                              Querying the registry for agent information
                              Requesting ID resets

            9513              Requesting updates to the certificate revocation list
                              Requesting Agent Manager information
                              Downloading the truststore file

            80                Agent recovery service
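Before installing the Agent Manager, it can be useful to verify that none of these ports is already taken. The following Python sketch checks the defaults from Table 3-2 on the local host; the port list and host are assumptions you can adjust for your environment:

```python
import socket

# Default Agent Manager ports from Table 3-2.
AGENT_MANAGER_PORTS = (9511, 9512, 9513, 80)

def port_in_use(port, host="127.0.0.1", timeout=0.5):
    """Return True if something is already listening on the TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def conflicting_ports(ports=AGENT_MANAGER_PORTS):
    """List the ports that would clash with the Agent Manager defaults."""
    return [p for p in ports if port_in_use(p)]
```

Running conflicting_ports() before the installation highlights, for example, an IIS instance already bound to port 80.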




                                                        Chapter 3. Installation planning and considerations   59
TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric
               The Fabric Manager uses the default TCP/IP ports listed in Table 3-3.

               Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric
                Port value          Usage

                8080                NetView Remote Web console

                9550                HTTP port

                9551                Reserved

                9552                Reserved

                9553                Cloudscape server port

                9554                NVDAEMON port

                9555                NVREQUESTER port

                9556                SNMPTrapPort port on which to get events forwarded from Tivoli NetView

                9557                Reserved

                9558                Reserved

                9559                Tivoli NetView Pager daemon

                9560                Tivoli NetView Object Database daemon

                 9561                Tivoli NetView Topology Manager daemon

                9562                Tivoli NetView Topology Manager socket

                9563                Tivoli General Topology Manager

                9564                Tivoli NetView OVs_PMD request services

                9565                Tivoli NetView OVs_PMD management services

                 9566                Tivoli NetView trapd socket

                9567                Tivoli NetView PMD service

                9568                Tivoli NetView General Topology map service

                9569                Tivoli NetView Object Database event socket

                9570                Tivoli NetView Object Collection facility socket

                9571                Tivoli NetView Web Server socket

                9572                Tivoli NetView SnmpServer




60   IBM TotalStorage Productivity Center V2.3: Getting Started
Fabric Manager remote console TCP/IP default ports
The Fabric Manager uses the ports in Table 3-4 for its remote console.

Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console
 Port value                                         Usage

 9560                                               HTTP port

 9561                                               Reserved

 9562                                               ASF Jakarta Tomcat’s Local Server port

 9563                                               Tomcat’s warp port

 9564                                               NVDAEMON port

 9565                                               NVREQUESTER port

 9569                                               Tivoli NetView Pager daemon

 9570                                               Tivoli NetView Object Database daemon

 9571                                               Tivoli NetView Topology Manager daemon

 9572                                               Tivoli NetView Topology Manager socket

 9573                                               Tivoli General Topology Manager

 9574                                               Tivoli NetView OVs_PMD request services

 9575                                               Tivoli NetView OVs_PMD management services

 9576                                               Tivoli NetView trapd socket

 9577                                               Tivoli NetView PMD service

 9578                                               Tivoli NetView General Topology map service

 9579                                               Tivoli NetView Object Database event socket

 9580                                               Tivoli NetView Object Collection facility socket

 9581                                               Tivoli NetView Web Server socket

 9582                                               Tivoli NetView SnmpServer


Fabric Agents TCP/IP ports
The Fabric Agents use the TCP/IP ports listed in Table 3-5.

Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric Agents
 Port value                                         Usage

 9510                                               Common agent

 9514                                               Used to restart the agent

 9515                                               Used to restart the agent




3.2.2 Default databases created during the installation
               During the installation of IBM TotalStorage Productivity Center, we recommend DB2 as the
               database type. Table 3-6 lists the default databases that the installer creates during the
               installation.

               Table 3-6 Default DB2 databases
                Application                                       Default database name (DB2)

                 IBM Director                                      No default; we created database IBMDIR

                Tivoli Agent Manager                              IBMCDB

                IBM TotalStorage Productivity Center for Disk     DMCOSERV
                and Replication Base

                IBM TotalStorage Productivity Center for Disk     PMDATA

                IBM TotalStorage Productivity Center for          ESSHWL
                Replication hardware subcomponent

                IBM TotalStorage Productivity Center for          ELEMCAT
                Replication element catalog

                IBM TotalStorage Productivity Center for          REPMGR
                Replication replication manager

                IBM TotalStorage Productivity Center for          SVCHWL
                Replication SVC hardware subcomponent

                IBM TotalStorage Productivity Center for Fabric   ITSANM

                 IBM TotalStorage Productivity Center for Data     No default; we created database TPCDATA


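For scripted checks against the DB2 catalog, the defaults above can be kept in a small lookup table. A minimal sketch of our own (names from Table 3-6; the IBM Director and Data entries are our lab choices, not product defaults):

```python
# Default DB2 database names from Table 3-6. The entries marked
# "site choice" have no product default; the names shown are the
# ones we created in our lab.
DEFAULT_DATABASES = {
    "IBM Director": "IBMDIR",                  # site choice
    "Tivoli Agent Manager": "IBMCDB",
    "Disk and Replication Base": "DMCOSERV",
    "Disk": "PMDATA",
    "Replication hardware subcomponent": "ESSHWL",
    "Replication element catalog": "ELEMCAT",
    "Replication replication manager": "REPMGR",
    "Replication SVC hardware subcomponent": "SVCHWL",
    "Fabric": "ITSANM",
    "Data": "TPCDATA",                         # site choice
}

def database_for(component):
    """Look up the expected DB2 database name for a component, or None."""
    return DEFAULT_DATABASES.get(component)
```

Comparing this table against the output of the DB2 catalog on each server is a quick way to verify that every component's database was created as expected.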

3.3 Our lab setup environment
               This section gives a brief overview of what our lab setup environment looked like and what we
               used to document the installation.

               Server hardware used
               We used four IBM Eserver xSeries servers with:
                  2 x 2.4 GHz CPU per system
                  4 GB Memory per system
                  73 GB HDD per system
                  Windows 2000 with Service Pack 4

               System 1
               The name of our first system was Colorado. The following applications were installed on this
               system:
                  DB2
                  IBM Director
                  WebSphere Application Server
                  WebSphere Application Server update
                  Tivoli Agent Manager
                  IBM TotalStorage Productivity Center for Disk and Replication Base
                  IBM TotalStorage Productivity Center for Disk
                  IBM TotalStorage Productivity Center for Replication


   IBM TotalStorage Productivity Center for Data
   IBM TotalStorage Productivity Center for Fabric

System 2
The name of our second system was Gallium. The following applications were installed on
this server:
   Data Agent

System 3
The name of our third system was PQDISRV. The following applications were installed on this
server:
   DB2
   Application software

Systems used for CIMOM servers
We used four xSeries servers for our Common Information Model Object Manager (CIMOM)
servers. They consisted of:
   2 GHz CPU per system
   2 GB Memory per system
   40 GB HDD per system
   Windows 2000 server with Service Pack 4

CIMOM system 1
Our first CIMOM server was named TPCMAN. On this server, we installed:
   ESS CLI
   ESS CIMOM
   LSI Provider (FAStT CIMOM)

CIMOM system 3
Our third CIMOM system was named SVCCON. We installed the following applications on
this server:
   SAN Volume Controller (SVC) Console
   SVC CIMOM

Networking
We used the following switches for networking:
   IBM Ethernet 10/100 24 Port switch
   2109-F16 Fibre Channel switch

Storage devices
We employed the following storage devices:
   IBM TotalStorage Enterprise Storage Server (ESS) 800 and F20
   DS8000
   DS6000
   DS4000
   IBM SVC

Figure 3-1 on page 64 shows a diagram of our lab setup environment.




[Diagram: an Ethernet switch connects the ESS, the SVC cluster and its management console, the CIMOM servers (SVCCON, MARYLAMB, and TPCMAN), and the Colorado, Gallium, PQDISRV, and Faroe servers. A 2109-F16 Fibre Channel switch connects the servers to the FAStT 700 storage.]
Figure 3-1 Lab setup environment



3.4 Pre-installation check list
                                 You need to complete the following tasks in preparation for installing the IBM TotalStorage
                                 Productivity Center. Print the tables in Appendix A, “Worksheets” on page 991, to keep track
                                 of the information you will need during the installation, such as user names, ports, IP
                                 addresses, and locations of servers and managed devices.
                                 1. Determine which elements of the TotalStorage Productivity Center you will install.
                                 2. Uninstall Internet Information Services.
                                 3. Grant the following privileges to the user account that will be used to install the
                                    TotalStorage Productivity Center:
                                         –     Act as part of the operating system
                                         –     Create a token object
                                         –     Increase quotas
                                         –     Replace a process-level token
                                          –     Log on as a service
                                 4. Install and configure Simple Network Management Protocol (SNMP) (Fabric requirement).
                                 5. Identify any firewalls and obtain the required authorization.
                                 6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center
                                    servers.


3.5 User IDs and security
                   This section discusses the user IDs that are used during the installation and those that are
                   used to manage and work with TotalStorage Productivity Center. It also explains how you can
                   increase the basic security of the different components.


3.5.1 User IDs
                This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center
                environment. For some of the IDs, Table 3-8 points to additional information that is available
                in the manuals.

                   Suite Installer user
                   We recommend that you use the Windows Administrator or a dedicated user for the
                   installation of TotalStorage Productivity Center. That user ID should have the user rights
                   listed in Table 3-7.

                   Table 3-7 Requirements for the Suite Installer user
                    User rights/policy                                   Used for

                    Act as part of the operating system                     DB2
                                                                            Productivity Center for Disk
                                                                            Fabric Manager

                    Create a token object                                   DB2
                                                                            Productivity Center for Disk

                    Increase quotas                                         DB2
                                                                            Productivity Center for Disk

                    Replace a process-level token                           DB2
                                                                            Productivity Center for Disk

                    Log on as a service                                     DB2

                    Debug programs                                          Productivity Center for Disk

                   Table 3-8 shows the user IDs that are used in a TotalStorage Productivity Center
                   environment. It provides information about the Windows group to which the user ID must
                   belong, whether it is a new user ID that is created during the installation, and when the user
                   ID is used.

Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment
 Element                 User ID            New user      Type      Group or         Usage
                                                                    groups

 Suite Installer         Administrator      No

 DB2                     db2admin (a)       Yes,          Windows                    DB2 management and Windows
                                            will be                                  Service Account
                                            created

 IBM Director            tpcadmin (a)       No            Windows   DirAdmin or      Windows Service Account
 (see also “IBM                                                     DirSuper
 Director” on
 page 67)




Element                 User ID         New user     Type       Group or        Usage
                                                                 groups

 Resource Manager        manager (b)     No,          Tivoli     N/A, internal   Used during the registration of a
                                         default      Agent      user            Resource Manager to the Agent
                                         user         Manager                    Manager

 Common Agent            AgentMgr (b)    No           Tivoli     N/A, internal   Used to authenticate agents and
 (see also “Common                                    Agent      user            lock the certificate key files
 Agent” on page 67)                                   Manager

 Common Agent            itcauser (b)    Yes,         Windows    Windows         Windows Service Account
                                         will be
                                         created

 TotalStorage            tpccimom (a)    Yes, will    Windows    DirAdmin        This ID is used to accomplish
 Productivity Center                     be                                      connectivity with the managed
 universal user                          created                                 devices. For example, this ID has to
                                                                                 be set up on the CIM Agents.

 Tivoli NetView          (c)
                                                      Windows                    See “Fabric Manager User IDs” on
                                                                                 page 68

 IBM WebSphere           (a)
                                                      Windows                    See “Fabric Manager User IDs” on
                                                                                 page 68

 Host Authentication     (a)
                                                      Windows                    See “Fabric Manager User IDs” on
                                                                                 page 68
     a. This account can have any name you choose.
     b. This account name cannot be changed during the installation.
     c. The DB2 administrator user ID and password are used here. See “Fabric Manager User IDs” on page 68.

                  Granting privileges
                  Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for
                  Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and the IBM
                  TotalStorage Productivity Center for Replication.

                   These user rights are governed by the local security policy and are not initially set as
                   defaults for administrators, so they may not be in effect when you log on as the local
                   administrator. If the IBM TotalStorage Productivity Center installation program does not
                   detect the required user rights for the logged-on user name, it can optionally set them by
                   updating the local security policy. Alternatively, you can set them manually before
                   performing the installation.

                  To manually set these privileges, follow these steps:
                   1. Click Start → Settings → Control Panel.
                  2. Double-click Administrative Tools.
                  3. Double-click Local Security Policy.
                  4. The Local Security Settings window opens. Expand Local Policies. Then double-click
                     User Rights Assignments to see the policies in effect on your system. For each policy
                     added to the user, perform the following steps:
                       a. Highlight the policy to be selected.
                       b. Double-click the policy and look for the user’s name in the Assigned To column of the
                          Local Security Policy Setting window to verify the policy setting. Ensure that the Local
                          Policy Setting and the Effective Policy Setting options are selected.



c. If the user name does not appear in the list for the policy, you must add the policy to the
      user. Perform the following steps to add the user to the list:
      i. In the Local Security Policy Setting window, click Add.
      ii. In the Select Users or Groups window, under the Name column, highlight the user
           or group.
      iii. Click Add to place the name in the lower window.
      iv. Click OK to add the policy to the user or group.
5. After you set these user rights, either by using the installation program or manually, log off
   the system and then log on again for the user rights to take effect.
6. Restart the installation program to continue with the IBM TotalStorage Productivity Center
   for Disk and Replication Base.

TotalStorage Productivity Center communication user
The communication user account is used for authentication between several different
elements of the environment. For example, if WebSphere Application Server is installed with
the Suite Installer, its Administrator ID is the communication user.

IBM Director
With Version 4.1, you no longer need to create an “internal” user account. All user IDs must
be operating system accounts and members of one of the following groups:
   DirAdmin or DirSuper groups (Windows); diradmin or dirsuper groups (Linux)
   Administrator or Domain Administrator groups (Windows), root (Linux)

In addition, a host authentication password is used to allow managed hosts and remote
consoles to communicate with IBM Director.

Resource Manager
The user ID and password (the defaults are manager and password) for the Resource Manager
are stored in the AgentManager\config\Authorization.xml file on the Agent Manager. Since this is
used only during the initial registration of a new Resource Manager, there is no problem with
changing the values at any time. You can find a detailed procedure on how to change this in
the Installation and Planning Guides of the corresponding manager.

You can have multiple Resource Manager user IDs if you want to separate the administrators
for the different managers, for example for IBM TotalStorage Productivity Center for Data and
IBM TotalStorage Productivity Center for Fabric.

Common Agent
Each time the Common Agent is started, this context and password are used to validate the
registration of the agent with the Tivoli Agent Manager. Furthermore the password is used to
lock the certificate key files (agentTrust.jks).

The default password is changeMe, but you should change the password when you install the
Tivoli Agent Manager. The Tivoli Agent Manager stores this password in the
AgentManager.properties file.

If you start with the defaults, but want to change the password later, all the agents have to be
changed. A procedure to change the password is available in the Installation and Planning
Guides of the corresponding managers (at this time Data or Fabric). Since the password is
used to lock the certificate files, you must also apply this change to Resource Managers.



The Common Agent user ID AgentMgr is not an actual user account, but rather the context in
                which the agent is registered with the Tivoli Agent Manager. There is no need to change this,
                so we recommend that you accept the default.

               TotalStorage Productivity Center universal user
               The account used to accomplish connectivity with managed devices has to be part of the
               DirAdmin (Windows) or diradmin (Linux) group. This user ID communicates with CIMOMs
               during install and post install. It also communicates with WebSphere.

               Fabric Manager User IDs
                During the installation of IBM TotalStorage Productivity Center for Fabric, you can select
                whether you want to use individual passwords for subcomponents such as DB2, IBM
                WebSphere, NetView, and Host Authentication. You can also choose to use the DB2 administrator’s
               user ID and password to make the configuration simpler. Figure 4-117 on page 164 shows
               the window where you can choose the options.


3.5.2 Increasing user security
                The goal of increasing security is to have multiple roles available for the various tasks that can
                be performed. Each role is associated with a certain group. Users are added only to the
                groups that they need to belong to for their work.

                Not all components offer a way to increase security, and some methods require detailed
                knowledge of the specific component to perform the configuration successfully.

               IBM TotalStorage Productivity Center for Data
               During the installation of Productivity Center for Data, you can enter the name of a Windows
               group. Every user within this group is allowed to manage Productivity Center for Data. Other
               users may only start the interface and look at it.

               You can add or change the name of that group later by editing the server.config file and
               restarting Productivity Center for Data.

               Productivity Center for Data does not support the following domain login formats for logging
               into its server component:
                  (domain name)/(username)
                  (username)@(domain)

               Because it does not support these formats, you must set up users in a domain account that
               can log into the server. Perform the following steps before you install Productivity Center for
               Data in your environment:
               1. Create a Local Admin Group.
               2. Create a Domain Global Group.
               3. Add the Domain Global Group to the Local Admin Group.

               Productivity Center for Data looks up the SID for the domain user when the login occurs. You
               only need to specify a user name and password.




3.5.3 Certificates and key files
                 Within a TotalStorage Productivity Center environment, several applications use certificates
                 to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli
                 Agent Manager.

                 Productivity Center for Disk and Replication certificates
                 The WebSphere Application Server that is part of Productivity Center for Disk and Replication
                 uses certificates for Secure Sockets Layer (SSL) communication. During the installation, key
                 files can be generated as self-signed certificates, but you must enter a password for each file
                 to lock it. The default file names are:
                       MDMServerKeyFile.jks
                        MDMServerTrustFile.jks

                  The default directory for the key file on the Agent Manager is C:\IBM\mdm\dm\keys.

                 Tivoli Agent Manager certificates
                 The Agent Manager comes with demonstration certificates that you can use. However, you
                 can also create new certificates during the installation of Agent Manager (see Figure 4-26 on
                 page 104).

                 If you choose to create new files, the password that you enter on the panel, as shown in
                 Figure 4-27 on page 105, as the Agent registration password is used to lock the
                 agentTrust.jks key file. The default directory for that key file on the Agent Manager is
                  C:\Program Files\IBM\AgentManager\certs.

                 There are more key files in that directory, but during the installation and first steps, the
                 agentTrust.jks file is the most important one. This is only important if you allow the installer to
                 create your keys.


3.5.4 Services and service accounts
                 The managers and components that belong to the TotalStorage Productivity Center are
                 started as Windows Services. Table 3-9 provides an overview of the most important services.
                 To keep it simple, we did not include all the DB2 services in the table.

Table 3-9 Services and service accounts
 Element                Service name                    Service account     Comment

 DB2                                                    db2admin            The account needs to be part of
                                                                            Administrators and DB2ADMNS.

 IBM Director           IBM Director Server             Administrator       You need to modify the account to be part
                                                                            of one of the groups: DirAdmin or DirSuper.

 Agent Manager          IBM WebSphere Application       LocalSystem         You need to set this service to start
                        Server V5 — Tivoli Agent                            automatically, after the installation.
                        Manager

 Common Agent           IBM Tivoli Common Agent —       itcauser
                         C:\Program Files\tivoli\ep

 Productivity Center    IBM TotalStorage Productivity   TSRMsrv1
 for Data               Center for Data server

 Productivity Center    IBM WebSphere Application       LocalSystem
 for Fabric             Server V5 — Fabric Manager



Element                  Service name                      Service account   Comment

 Tivoli NetView           Tivoli NetView Service            NetView
 Service



3.6 Starting and stopping the managers
                   To start, stop, or restart one of the managers or components, use the Windows Services
                   panel (Control Panel → Administrative Tools → Services). Table 3-10 shows a list of the services.

Table 3-10 Services used for TotalStorage Productivity Center
 Element                           Service name                                                 Service account

 DB2                                                                                            db2admin

 IBM Director                      IBM Director Server                                          Administrator

 Agent Manager                     IBM WebSphere Application Server V5 - Tivoli Agent Manager   LocalSystem

 Common Agent                      IBM Tivoli Common Agent — C:\Program Files\tivoli\ep        itcauser

 Productivity Center for Data      IBM TotalStorage Productivity Center for Data Server         TSRMsrv1

 Productivity Center for Fabric    IBM WebSphere Application Server V5 - Fabric Manager         LocalSystem

 Tivoli NetView Service            Tivoli NetView Service                                       NetView

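The same services can also be driven from a script rather than the Services panel. The sketch below is our own helper, not part of the product: it builds standard Windows net start/net stop command lines from the service display names in Table 3-10, and only does real work when run on the TotalStorage Productivity Center server itself.

```python
import subprocess

# Service display names from Table 3-10 (the Windows "net" command
# accepts the display name).
SERVICES = [
    "IBM Director Server",
    "IBM WebSphere Application Server V5 - Tivoli Agent Manager",
    "IBM TotalStorage Productivity Center for Data Server",
    "IBM WebSphere Application Server V5 - Fabric Manager",
    "Tivoli NetView Service",
]

def net_command(action, service):
    """Build the Windows 'net start'/'net stop' command for a service."""
    if action not in ("start", "stop"):
        raise ValueError("action must be 'start' or 'stop'")
    return ["net", action, service]

def control_service(action, service):
    # Runs the real Windows command; this fails on non-Windows systems.
    return subprocess.run(net_command(action, service), check=True)
```

For example, control_service("stop", "Tivoli NetView Service") issues the same request as stopping the service from the Services panel.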



3.7 Windows Management Instrumentation
                   Before beginning the Prerequisite Software installation, you must stop and disable the
                   Windows Management Instrumentation service. To disable the service, follow these steps:
                  1. Go to Start → Settings → Control Panel → Administrative Tools → Services.
                  2. Scroll down and double-click the Windows Management Instrumentation service (see
                     Figure 3-2 on page 71).




Figure 3-2 Windows Management Instrumentation service

3. In the Windows Management Instrumentation Properties window, go down to Service
   status and click the Stop button (Figure 3-3). Wait for the service to stop.




Figure 3-3 Stopping Windows Management Instrumentation




4. After the service is stopped, in the Windows Management Instrumentation Properties
                  window, change the Startup type to Disabled (Figure 3-4) and click OK.




               Figure 3-4 Disabled Windows Management Instrumentation

               5. After you disable the service, it may restart on its own. If it does, go back and stop the
                  service again. The service should now be stopped and disabled as shown in Figure 3-5.




               Figure 3-5 Windows Management Instrumentation successfully disabled


                Important: After the Prerequisite Software installation completes, you must enable the
                Windows Management Instrumentation service before installing the suite. To enable the
                service, change the Startup type from Disabled (see Figure 3-4) to Automatic.




3.8 World Wide Web Publishing
          As with the Windows Management Instrumentation service, the World Wide Web Publishing
          service must also be stopped and disabled before starting the Prerequisite Software Installer.
          To stop the World Wide Web Publishing service, follow the same steps as in 3.7, “Windows
          Management Instrumentation” on page 70. This service can remain disabled.
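
Both services can also be stopped and disabled from a command prompt instead of the
Services panel. This is an illustrative sketch, assuming the standard Windows service short
names winmgmt (Windows Management Instrumentation) and W3SVC (World Wide Web
Publishing); the sc utility is included in the Windows 2000 Resource Kit and in later
Windows versions:

```
rem Stop and disable Windows Management Instrumentation
net stop winmgmt
sc config winmgmt start= disabled

rem Stop and disable World Wide Web Publishing
net stop W3SVC
sc config W3SVC start= disabled
```

To re-enable Windows Management Instrumentation afterward, use sc config winmgmt
start= auto followed by net start winmgmt.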



3.9 Uninstalling Internet Information Services
         Make sure Internet Information Services (IIS) is not installed on the server. If it is installed,
         uninstall it using the following procedure.
         1.   Click Start → Settings → Control Panel.
         2.   Click Add/Remove Programs.
         3.   In the Add or Remove Programs window, click Add/Remove Windows Components.
         4.   In the Windows Components panel, deselect IIS.



3.10 Installing SNMP
         Before you install the components of the TotalStorage Productivity Center, install and
         configure SNMP.
         1. Click Start → Settings → Control Panel.
         2. Click Add/Remove Programs.
         3. In the Add or Remove Programs window, click Add/Remove Windows Components.
         4. Double-click Management and Monitoring Tools.
         5. In the Windows Components panel, select Simple Network Management Protocol and
            click OK.
         6. Close the panels and accept the installation of the components.
         7. The Windows installation CD or installation files are required. Make sure that the SNMP
            services are configured as explained in these steps:
              a. Right-click My Computer and select Manage.
              b. In the Computer Management window, click Services and Applications.
              c. Double-click Services.
         8. Scroll down to and double-click SNMP Service.
         9. In the SNMP Service Properties window, follow these steps:
         10.Click the Traps tab (see Figure 3-6 on page 74).




d. Make sure that the public community name is available.




               Figure 3-6 Traps tab in the SNMP Service Properties window

                  e. Click the Security tab (see Figure 3-7).
                  f. Select Accept SNMP packets from any host.
                  g. Click OK.




               Figure 3-7 SNMP Security Properties window

               11.After you set the public community name, restart the SNMP service.

3.11 IBM TotalStorage Productivity Center for Fabric
          Prior to installing IBM TotalStorage Productivity Center for Fabric, there are planning
          considerations and prerequisite tasks that you need to complete.


3.11.1 The computer name
          IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the
          manager, managed hosts, and the remote console. To verify your computer name on
          Windows, follow this procedure.
          1. Right-click the My Computer icon on your desktop and select Properties.
          2. The System Properties window opens.
             a. Click the Network Identification tab. Click Properties.
             b. The Identification Changes panel opens.
                i. Verify that your computer name is entered correctly. This is the name by which the
                   computer is identified on the network.
                ii. Verify that the full computer name is a fully qualified host name. For example,
                    user1.sanjose.ibm.com is a fully qualified host name.
                iii. Click More.
             c. The DNS Suffix and NetBIOS Computer Name panel opens. Verify that the Primary
                DNS suffix field displays a domain name.

           Important: The fully qualified host name must match the HOSTS file name (including
           case-sensitive characters).
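
The naming rules above can be expressed as a quick check outside the GUI. This is an
illustrative sketch; the helper functions are ours, not part of the product:

```python
# Illustrative helpers for the host-name rules in this section.

def is_fully_qualified(name: str) -> bool:
    """A fully qualified host name has a host part plus at least one
    domain label, for example user1.sanjose.ibm.com."""
    labels = name.split(".")
    return len(labels) >= 2 and all(labels)

def matches_hosts_entry(name: str, hosts_entry: str) -> bool:
    """Host names are case sensitive here, so the comparison is exact."""
    return name == hosts_entry

print(is_fully_qualified("user1.sanjose.ibm.com"))   # True
print(is_fully_qualified("user1"))                   # False
# Differs only in case, so it does not match:
print(matches_hosts_entry("Jason.groupa.mycompany.com",
                          "jason.groupa.mycompany.com"))  # False
```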


3.11.2 Database considerations
           When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is
           automatically created if you specify DB2 as the database. The default database name is
           TSANMDB.

          If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2
          database, and want to save the information in the database before you re-install the
          manager, you must use DB2 commands to back up the database. The default name for the
          IBM TotalStorage Productivity Center for Fabric DB2 database is TSANMDB. The database
          name for Cloudscape is also TSANMDB. You cannot change this database name.

          If you are installing the manager on more than one machine in a Windows domain, the
          managers on different machines may end up sharing the same DB2 database. To avoid this
          situation, you must either use different database names or different DB2 user names when
          installing the manager on different machines.
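
The backup mentioned above can be taken with the DB2 command line processor. This
one-line sketch assumes the default TSANMDB database name from the text; the target
directory is illustrative and must already exist:

```
db2 BACKUP DATABASE TSANMDB TO C:\backups
```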


3.11.3 Windows Terminal Services
          You cannot use the Windows Terminal Services to access a machine that is running the IBM
          TotalStorage Productivity Center for Fabric console (either the manager or remote console
          machine). Any TotalStorage Productivity Center for Fabric dialogs launched from the SAN
          menu in Tivoli NetView appear on the manager or remote console machine only. The dialogs
          do not appear in the Windows Terminal Services session.




3.11.4 Tivoli NetView
               IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you
               already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric
               upgrades it to version 7.1.3. If you have a Tivoli NetView release earlier than Version 7.1.1,
               IBM TotalStorage Productivity Center for Fabric prompts you to uninstall Tivoli NetView before
               you install this product.

               If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped.
               You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop.
                  Web Console
                  Web Console Security
                  MIB Loader
                  MIB Browser
                  Netmon Seed Editor
                  Tivoli Event Console Adapter

                Important: Ensure that the Windows 2000 Terminal Services is not running. Go to the
                Services panel and check for Terminal Services.


               User IDs and password considerations
               TotalStorage Productivity Center for Fabric only supports local user IDs and groups. It does
               not support domain user IDs and groups.

               Cloudscape database
               If you install TotalStorage Productivity Center for Fabric and specify the Cloudscape
               database, you need the following user IDs and passwords:
                  Agent manager name or IP address and password
                  Common agent password to register with the Agent Manager
                  Resource manager user ID and password to register with the Agent Manager
                   WebSphere administrative user ID and password
                   Host authentication password only
                  Tivoli NetView password only

               DB2 database
               If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database,
               you need the following user IDs and passwords:
                  Agent manager name or IP address and password
                  Common agent password to register with the Agent Manager
                  Resource manager user ID and password to register with the Agent Manager
                  DB2 administrator user ID and password
                  DB2 user ID and password
                  WebSphere administrative user ID and password
                  Host authentication password only
                  Tivoli NetView password only

                 Note: If you are running Windows 2000, when the IBM TotalStorage Productivity Center
                 for Fabric installation program asks for an existing user ID for WebSphere, that user ID
                 must have the Act as part of the operating system user right.

               WebSphere
               To change the WebSphere user ID and password, follow this procedure:
               1. Open the install_location\apps\was\properties\soap.client.props file.


2. Modify the following entries:
                 – com.ibm.SOAP.loginUserid=user_ID (enter a value for user_ID)
                 – com.ibm.SOAP.loginPassword=password (enter a value for password)
           3. Save the file.
           4. Run the following script:
                ChangeWASAdminPass.bat user_ID password install_dir
                 Here user_ID is the WebSphere user ID and password is the password. install_dir is the
                 directory where the manager is installed and is optional. For example, install_dir is
                 c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.
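
The edit in steps 1 through 3 can also be scripted. This is a minimal sketch, assuming the
standard WebSphere property names com.ibm.SOAP.loginUserid and
com.ibm.SOAP.loginPassword; the function name is ours:

```python
from pathlib import Path

def update_soap_props(props_path, user_id, password):
    """Rewrite the SOAP login user ID and password in soap.client.props,
    leaving all other properties unchanged."""
    path = Path(props_path)
    lines = []
    for line in path.read_text().splitlines():
        if line.startswith("com.ibm.SOAP.loginUserid="):
            line = "com.ibm.SOAP.loginUserid=" + user_id
        elif line.startswith("com.ibm.SOAP.loginPassword="):
            line = "com.ibm.SOAP.loginPassword=" + password
        lines.append(line)
    path.write_text("\n".join(lines) + "\n")
```

After running the script, continue with step 4 (ChangeWASAdminPass.bat) as described
above.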


3.11.5 Personal firewall
           If you have a software firewall on your system, disable the firewall while installing the Fabric
           Manager. The firewall causes Tivoli NetView installation to fail. You can enable the firewall
           after you install the Fabric Manager.

           Security considerations
            You set up security by using certificates. You can use the demonstration certificates or
            generate new certificates; this option is specified when you install the Agent Manager. See
            Figure 4-26 on page 104. We recommend that you generate new certificates.

           If you used the demonstration certificates, continue with the installation.

           If you generated new certificates, follow this procedure:
           1. Copy the manager CD image to your computer.
           2. Copy the agentTrust.jks file from the Agent Manager (AgentManager/certs directory) to
              the /certs directory of the manager CD image. This overwrites the existing agentTrust.jks
              file.
           3. You can write a new CD image with the new file or keep this image on your computer and
              point the Suite Installer to the directory when requested.
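
Step 2 above can be sketched as a small script. The certs directory names follow the text;
the function name is ours:

```python
import shutil
from pathlib import Path

def replace_agent_trust(agent_manager_dir, cd_image_dir):
    """Copy agentTrust.jks from the Agent Manager certs directory over the
    one in the manager CD image, overwriting the existing file."""
    src = Path(agent_manager_dir) / "certs" / "agentTrust.jks"
    dst = Path(cd_image_dir) / "certs" / "agentTrust.jks"
    shutil.copyfile(src, dst)
    return dst
```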


3.11.6 Changing the HOSTS file
           When you install Service Pack 3 for Windows 2000 on your computers, follow these steps to
           avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The
            problem is caused by host name resolution, which can return the short name rather than
            the fully qualified host name.

           You can avoid this problem by changing the entries in the corresponding host tables on the
           Domain Name System (DNS) server and on the local computer. The fully qualified host name
           must be listed before the short name as shown in Example 3-1. See 3.11.1, “The computer
           name” on page 75, for details about determining the host name.

           To correct this problem, you have to edit the HOSTS file. The HOSTS file is in the
            %SystemRoot%\system32\drivers\etc directory.

           Example 3-1 Sample HOSTS file
           #   This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
           #
           #   This file contains the mappings of IP addresses to host names. Each
           #   entry should be kept on an individual line. The IP address should
           #   be placed in the first column followed by the corresponding host name.


#   The IP address and the host name should be separated by at least one
               #   space.
               #
               #   Additionally, comments (such as these) may be inserted on individual
               #   lines or following the machine name denoted by a '#' symbol.
               #
               #   For example:
               #
               #        102.54.94.97     rhino.acme.com           # source server
               #         38.25.63.10     x.acme.com               # x client host

               127.0.0.1       localhost
               #
               192.168.123.146 jason.groupa.mycompany.com jason
               192.168.123.146 jason jason.groupa.mycompany.com


                    Note: Host names are case sensitive, which is a limitation within WebSphere. Check your
                    host name.
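
The ordering rule for Example 3-1 can be checked programmatically. This is an illustrative
sketch; the helper is ours, not part of the product:

```python
def fqdn_listed_first(hosts_line: str) -> bool:
    """Check that a HOSTS entry lists the fully qualified host name
    before the short name, as required above."""
    fields = hosts_line.split()
    if len(fields) < 3 or fields[0].startswith("#"):
        return False  # need at least: IP address, FQDN, short name
    return "." in fields[1] and "." not in fields[2]

print(fqdn_listed_first("192.168.123.146 jason.groupa.mycompany.com jason"))  # True
print(fqdn_listed_first("192.168.123.146 jason jason.groupa.mycompany.com"))  # False
```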



3.12 IBM TotalStorage Productivity Center for Data
               Prior to installing IBM TotalStorage Productivity Center for Data, there are planning
               considerations and prerequisite tasks that you need to complete.


3.12.1 Server recommendations
               The IBM TotalStorage Productivity Center for Data server component acts as a traffic officer
               for directing information and handling requests from the agent and UI components installed
               within an environment. You need to install at least one server within your environment.

               We recommend that you do not manage more than 1000 agents with a single server. If you
               need to install more than 1000 agents, we suggest that you install an additional server for
               those agents to maintain optimal performance.
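
The 1000-agents-per-server guideline above translates into a simple sizing calculation; this
sketch is ours, not a product formula:

```python
import math

def servers_needed(agent_count, agents_per_server=1000):
    """Apply the guideline above: at most 1000 agents per Data server,
    with a minimum of one server in the environment."""
    return max(1, math.ceil(agent_count / agents_per_server))

print(servers_needed(800))   # 1
print(servers_needed(2500))  # 3
```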


3.12.2 Supported subsystems and databases
               This section contains the subsystems, file system formats, and databases that the
               TotalStorage Productivity Center for Data supports.

               Storage subsystem support
               Data Manager currently supports the monitoring and reporting of the following storage
               subsystems:
                    Hitachi Data Systems
                    HP StorageWorks
                    IBM FAStT 200, 600, 700, and 900 with an SMI-S 1.0 compliant CIM interface
                     SAN Volume Controller Console Version 1.1.0.2, 1.1.0.9, 1.2.0.5, 1.2.0.6 (1.3.2 patch
                     available), 1.2.1.x
                     SAN Volume Controller CIMOM Version 1.1.0.1, 1.2.0.4, 1.2.0.5 (1.3.2 patch available),
                     1.2.1.x
                    ESS ICAT 1.1.0.2, 1.2.0.15, 1.2.0.29, 1.2.x, 1.2.1.40 and later for ESS




File system support
           Data Manager supports the monitoring and reporting of the following file systems:
              FAT
              FAT32
              NTFS4, NTFS5
              EXT2, EXT3
              AIX_JFS
              HP_HFS
              VXFS
              UFS
              TMPFS
              AIX_OLD
              NW_FAT
              NW_NSS
              NF
              WAFL
              FAKE
              AIX_JFS2
              SANFS
              REISERFS

           Network File System support
           Data Manager currently supports the monitoring and reporting of the following Network File
           Systems (NFS):
              IBM TotalStorage SAN File System 1.0 (Version 1 Release 1), from AIX V5.1 (32-bit) and
              Windows 2000 Server/Advanced Server clients
              IBM TotalStorage SAN File System 2.1, 2.2 from AIX V5.1 (32-bit), Windows 2000
              Server/Advanced Server, Red Hat Enterprise Linux 3.0 Advanced Server, and SUN
              Solaris 9 clients
              General Parallel File System (GPFS) v2.1, v2.2

           RDBMS support
           Data Manager currently supports the monitoring of the following relational database
           management systems (RDBMS):
              Microsoft SQL Server 7.0, 2000
              Oracle 8i, 9i, 9i V2, 10G
              Sybase SQL Server 11.0.9 and higher
              DB2 Universal Database™ (UDB) 7.1, 7.2, 8.1, 8.2 (64-bit UDB DB2 instances are
              supported)


3.12.3 Security considerations
           This section describes the security issues that you must consider when installing Data
           Manager.




User levels
                There are two levels of users within IBM TotalStorage Productivity Center for Data:
                non-administrator users and administrator users. The user level determines how users can
                use IBM TotalStorage Productivity Center for Data.
                   Non-administrator users. These users can:
                   – View the data collected by IBM TotalStorage Productivity Center for Data.
                   – Create, generate, and save reports.
                   IBM TotalStorage Productivity Center for Data administrators. These users can:
                   – Create, modify, and schedule Pings, Probes, and Scans.
                   – Create, generate, and save reports.
                   – Perform administrative tasks and customize the IBM TotalStorage Productivity Center
                     for Data environment.
                   – Create Groups, Profiles, Quotas, and Constraints.
                   – Set alerts.

                Important: Security is set up by using the certificates. You can use the demonstration
                certificates or you can generate new certificates. It is recommended that you generate new
                certificates when you install the Agent Manager.


               Certificates
               If you generated new certificates, follow this procedure:
               1. Copy the CD image to your computer.
               2. Copy the agentTrust.jks file from the Agent Manager directory AgentManager/certs to the
                  CommonAgentcerts directory of the manager CD image. This overwrites the existing
                  agentTrust.jks file.

               You can write a new CD image with the new file or keep this image on your computer and
               point the Suite Installer to the directory when requested.

                Important: Before installing IBM TotalStorage Productivity Center for Data, define the
                group within your environment that will have administrator rights within Data Manager. This
                group must exist on the same machine where you are installing the Server component.
                During the installation, you are prompted to enter the name of this group.




3.12.4 Creating the DB2 database
           Before you install the component, create the IBM TotalStorage Productivity Center for Data
           database.
            1. From the Start menu, select Start → Programs → IBM DB2 → General Administration
               Tools → Control Center.
            2. The DB2 Control Center opens. Create the database that is used for IBM TotalStorage
               Productivity Center for Data as shown in Figure 3-8. Select All Databases, right-click,
               and select Create Databases → Standard.




           Figure 3-8 DB2 database creation




3. In the window that opens (Figure 3-9), complete the required database name information.
                  We used the database name of TPCDATA. Click Finish to complete the database
                  creation.




               Figure 3-9 DB2 database information for creation
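
If you prefer the command line, the same database can be created with the DB2 command
line processor instead of the Control Center. This one-line sketch uses the TPCDATA name
from step 3 and accepts all database defaults:

```
db2 CREATE DATABASE TPCDATA
```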




4


    Chapter 4.   Installing the IBM TotalStorage
                 Productivity Center suite
                  Installation of the TotalStorage Productivity Center suite of products is done using two install
                  wizards. The first, the Prerequisite Software Installer, installs all the products needed before
                  you can install the TotalStorage Productivity Center suite. The second, the Suite Installer,
                  installs the individual components or the entire suite of products.

                 This chapter documents the use of the Prerequisite Software Installer and the Suite Installer.
                 It also includes hints and tips based on our experience.




© Copyright IBM Corp. 2005. All rights reserved.                                                               83
4.1 Installing the IBM TotalStorage Productivity Center
                IBM TotalStorage Productivity Center provides a Prerequisite Software Installer and a Suite
                Installer that guide you through the installation process. You can also use the Suite Installer
                to install stand-alone components.

               The Prerequisite Software Installer installs the following products in this order:
               1. DB2, which is required by all the managers
               2. WebSphere Application Server, which is required by all the managers except for
                  TotalStorage Productivity Center for Data
               3. Tivoli Agent Manager, which is required by Fabric Manager and Data Manager

               The Suite Installer installs the following products or components in this order:
               1. IBM Director, which is required by TotalStorage Productivity Center for Disk and
                  TotalStorage Productivity Center for Replication
               2. Productivity Center for Disk and Replication Base, which is required by TotalStorage
                  Productivity Center for Disk and TotalStorage Productivity Center for Replication
               3. TotalStorage Productivity Center for Disk
               4. TotalStorage Productivity Center for Replication
               5. TotalStorage Productivity Center for Fabric - Manager
               6. TotalStorage Productivity Center for Data - Manager

               In addition to the manager installations, the Suite Installer guides you through the installation
               of other IBM TotalStorage Productivity Center components. You can select more than one
               installation option at a time. This redbook separates the types of installations into several
               sections to help explain them. The additional types of installation tasks are:
                  IBM TotalStorage Productivity Center Agent installations
                  IBM TotalStorage Productivity Center GUI/Client installations
                  Language Pack installations
                  IBM TotalStorage Productivity Center product uninstallations


4.1.1 Considerations
               You may want to use IBM TotalStorage Productivity Center for Disk to manage the IBM
               TotalStorage Enterprise Storage Server (ESS), DS8000, DS6000, Storage Area Network
               (SAN) Volume Controller (SVC), IBM TotalStorage Fibre Array Storage Technology (FAStT),
               or DS4000 storage subsystems. In this case, you must install the prerequisite input/output
               (I/O) Subsystem Licensed Internal Code (SLIC) and Common Information Model (CIM) Agent
               for the devices. See Chapter 6, “Configuring IBM TotalStorage Productivity Center for Disk”
               on page 247, for more information.

                If you are installing the CIM Agent for the ESS, DS8000, or DS6000, you must install it on a
                separate machine.

               TotalStorage Productivity Center 2.3 does not support Linux on zSeries or on S/390®. Nor
               does IBM TotalStorage Productivity Center support Windows domains.




4.2 Prerequisite Software Installation
           This section guides you step by step through the install process of the prerequisite software
           components.


4.2.1 Best practices
           Before you begin installing the prerequisite software components, we recommend that you
           complete the following tasks:
           1. Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center
              components, including the IBM TotalStorage Productivity Center for Disk and Replication
              Base, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity
              Center for Replication, IBM TotalStorage Productivity Center for Data and IBM
              TotalStorage Productivity Center for Fabric. For details refer to “Granting privileges” on
              page 66.
           2. Make sure Internet Information Services (IIS) is not installed on the server. If it is installed,
              uninstall it using the procedure in 3.9, “Uninstalling Internet Information Services” on
              page 73.
           3. Install and configure Simple Network Management Protocol (SNMP) described in 3.10,
              “Installing SNMP” on page 73.
            4. Stop and disable Windows Management Instrumentation (3.7, “Windows Management
               Instrumentation” on page 70) and
              World Wide Web Publishing (3.8, “World Wide Web Publishing” on page 73) services.
           5. Create a database for Agent Manager installation. To create the database, see 3.12.4,
              “Creating the DB2 database” on page 81. The default database name for Agent Manager
              is IBMCDB.


4.2.2 Installing prerequisite software
           Follow these steps to install the prerequisite software components:
           1. Insert the IBM TotalStorage Productivity Center Prerequisite Software Installer CD into the
              CD-ROM drive.
              If Windows autorun is enabled, the installation program should start automatically. If it
              does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center
              CD-ROM drive. Double-click setup.exe.

                Note: It may take a few moments for the installer program to initialize. Be patient.
                Eventually, you see the language selection panel (Figure 4-1).

           2. The installer language window (Figure 4-1) opens. From the list, select a language. This is
              the language that is used to install this product. Click OK.




           Figure 4-1 Prerequisite Software Installer language



                                        Chapter 4. Installing the IBM TotalStorage Productivity Center suite   85
3. The Prerequisite Software Installer wizard welcome pane in Figure 4-2 opens. Click Next.
                  The Software License Agreement panel is then displayed. Read the terms of the license
                  agreement. If you agree with the terms of the license agreement select the I accept the
                  terms in the license agreement radio button and click Next to continue.




               Figure 4-2 Prerequisite Software Installer wizard

               4. The prerequisite operating system check panel in Figure 4-3 on page 87 opens. When it
                  completes successfully click Next.




Figure 4-3 Prerequisite Operating System check

5. The Tivoli Common Directory location panel (Figure 4-4) opens and prompts for a location
   for the log files. Accept the default location or enter a different location. Click Next to
   continue.




Figure 4-4 Tivoli Common Directory location




6. The product selection panel (Figure 4-5) opens. To install the entire TotalStorage
                  Productivity Center suite, check the boxes next to DB2, WebSphere, and Agent Manager.




               Figure 4-5 Product selection

               7. The DB2 Universal Database panel (Figure 4-6) opens. Select Enterprise Server Edition
                  and click Next to continue.




               Figure 4-6 DB2 Universal Database


 Note: After clicking Next (Figure 4-6), if you see the panel in Figure 4-7, you must first stop
 and disable the Windows Management Instrumentation service before continuing with the
 installation. See 3.7, “Windows Management Instrumentation” on page 70 for detailed
 instructions.




Figure 4-7 Windows Management Instrumentation service warning




8. The DB2 user name and password panel (Figure 4-8) opens. If the DB2 user name exists
                  on the system, the correct password must be entered or the DB2 installation will fail. If the
                  DB2 user name does not exist it will be created by the DB2 install. In our installation we
                  accepted the default user name and entered a unique password. Click Next to continue.




               Figure 4-8 DB2 user configuration




9. The Target Directory Confirmation panel (Figure 4-9) opens. Accept the default target
   directories for DB2 installation or enter a different location. Click Next.




Figure 4-9 Target Directory Confirmation

10.The language selection panel (Figure 4-10) opens. The languages that you select here are
   installed for DB2. Select your desired language(s). Click Next.




Figure 4-10 Language selection


11.The Preview Prerequisite Software Information panel (Figure 4-11) opens. Review the
                  information and click Next.




               Figure 4-11 Preview Prerequisite Software Information

               12.The WebSphere Application Server system prerequisites check panel (Figure 4-12)
                  opens. When the check completes successfully click Next.




               Figure 4-12 WebSphere Application Server system prerequisites check


13.The installation options panel (Figure 4-13) opens. Select the type of installation you wish
   to perform. The rest of this section follows the Unattended Installation path. Unattended
   Installation copies all installation images to a central location called the installation image
   depot; once the copies are complete, the component installations proceed with no further
   intervention. Attended Installation prompts you to enter the location of each install image
   as needed.
   Click Next to continue.




Figure 4-13 Installation options




14.The install image depot location panel opens (see Figure 4-14). Enter the location where
                  all installation images are to be copied. Click Next.




               Figure 4-14 Install image depot location

               15.You are first prompted for the location of the DB2 installation image (see Figure 4-15).
                  Browse to the installation image and select the path to the installation files or insert the
                  install CD and click Copy.




               Figure 4-15 DB2 installation source




16.After the DB2 installation image is copied to the install image depot, you are prompted for
   the location of the WebSphere installation image (see Figure 4-16). Browse to the
   installation image and select the path to the installation files or insert the install CD and
   click Copy.




Figure 4-16 WebSphere installation source

17.After the WebSphere installation image is copied, you are prompted for the location of the
   WebSphere Cumulative fix 3 installation image (see Figure 4-17). Browse to the
   installation image and select the path to the installation files or insert the install CD and
   click Copy.




Figure 4-17 WebSphere fix 3 installation source




18.When an install image has been successfully copied to the Install Image Depot, a green
                  check mark appears to the right of the prerequisite. After all the prerequisite software
                  images are successfully copied to the install image depot (Figure 4-18), click Next.




               Figure 4-18 Installation images copied successfully




19.The installation of DB2, WebSphere, and the WebSphere Fix Pack begins. When a
   prerequisite installs successfully, a green check mark appears to its left; if the installation
   fails, a red X appears instead. In that case, exit the installer, check the logs to determine
   and correct the problem, and restart the installer.
   When the installation completes successfully (see Figure 4-19), click Next.




Figure 4-19 DB2 and WebSphere installation complete




20.The Agent Manager Registry Information panel opens. Select the type of database,
   specify the database name, and choose a local or remote database. The default DB2
   database name is IBMCDB. For a local database connection, the DB2 database is
   created if it does not exist. We recommend that you accept the default database name
   for a local database. Click Next to continue (see Figure 4-20).

                Attention: For a remote database connection, the database specified below must exist.
                Refer to 3.12.4, “Creating the DB2 database” on page 81 for information on how to create a
                database in DB2.
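For a remote connection, the registry database can be created ahead of time with the DB2 command line processor. The following is a sketch only, assuming the commands are run as the DB2 instance owner on the remote server (on Windows, from a db2cmd window); IBMCDB matches the default name above:

```shell
REM Sketch: create the Agent Manager registry database on the remote
REM DB2 server before running the installer (see 3.12.4 for details).
db2 "CREATE DATABASE IBMCDB"

REM Confirm that the database now appears in the database directory.
db2 "LIST DATABASE DIRECTORY"
```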




               Figure 4-20 Agent Manager Registry Information




21.The Database Connection Information panel in Figure 4-21 opens. Specify the location of
   the database software directory (for DB2, the default install location is C:\Program
   Files\IBM\SQLLIB), and the database user name and password. If you are using a remote
   database, you must also specify the database host name and port. Click Next to continue.




Figure 4-21 Agent Manager database connection Information




Note: For a remote database connection the database specified in Figure 4-20 on page 98
                must exist. If the database does not exist, you will see the error message shown in
                Figure 4-22. Refer to 3.12.4, “Creating the DB2 database” on page 81 for information on
                how to create a database in DB2.




              Figure 4-22 DB2 database error




22.A panel opens prompting for a location to install Tivoli Agent Manager (see Figure 4-23).
   Accept the default location or enter a different location. Click Next to continue.




Figure 4-23 Tivoli Agent Manager installation directory




23.The WebSphere Application Server Information panel (Figure 4-24) opens. This panel lets
                 you specify the host name or IP address, and the cell and node names on which to install
                 the Agent Manager. If you specify a host name, use the fully qualified host name. For
                 example, specify HELIUM.almaden.ibm.com. If you use the IP address, use a static IP
                 address. This value is used in the URLs for all Agent Manager services. We recommend
                 that you use the fully qualified host name, not the IP address of the Agent Manager server.
                   Typically the cell and node name are both the same as the host name of the computer. If
                   WebSphere was installed before you started the Agent Manager installation wizard, you
                   can look up the cell and node name values in the
                   %WAS_INSTALL_ROOT%\bin\SetupCmdLine.bat file.
                  You can also specify the ports used by the Agent Manager. We recommend that you
                  accept the defaults.
                  – Registration Port: The default is 9511 for the server-side Secure Sockets Layer (SSL).
                  – Secure Port: The default is 9512 for client authentication, two-way SSL.
                  – Public Port: The default is 9513.
                  If you are using WebSphere network deployment or a customized deployment, make sure
                  that the cell and node names are correct. For more information about WebSphere
                  deployment, see your WebSphere documentation.
                  After filling in the required information in the WebSphere Application Server Information
                  panel, click Next.




              Figure 4-24 WebSphere Application Server Information
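A quick way to confirm the cell and node names is to search SetupCmdLine.bat for the variables that record them. This sketch assumes the default WebSphere install path and the variable names WAS_CELL and WAS_NODE; adjust both to your environment:

```shell
REM Sketch: list the cell and node names recorded at WebSphere install
REM time (path and variable names are assumptions).
findstr /i "WAS_CELL WAS_NODE" "C:\Program Files\WebSphere\AppServer\bin\SetupCmdLine.bat"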




Note: If an IP address is entered in the WebSphere Application Server Information panel
 shown in Figure 4-24, the next panel (see Figure 4-25) explains why a host name is
 recommended. Click Back to use a host name or click Next to use the IP address.




Figure 4-25 Agent Manager IP address warning




24.The Security Certificates panel (Figure 4-26) opens. Specify whether to create new
                 certificates or to use the demonstration certificates.
                  In a typical production environment, you would create new certificates. The ability to use
                  demonstration certificates is provided as a convenience for testing and demonstration
                  purposes. Make a selection and click Next.




              Figure 4-26 Security Certificates




25.The Security Certificate Settings panel (see Figure 4-27) opens. Specify the certificate
   authority name, security domain, and agent registration password.
   The agent registration password is used to register the agents. You must provide this
   password when you install the agents. This password also protects the Agent Manager key
   store and trust store files. Record this password; you will need it again later in the
   installation process.
   The domain name is used in the right-hand portion of the distinguished name (DN) of
   every certificate issued by the Agent Manager. It is the name of the security domain
   defined by the Agent Manager. Typically, this value is the registered domain name or
   contains the registered domain name. For example, for the computer system
   myserver.ibm.com, the domain name is ibm.com. This value must be unique in your
   environment. If you have multiple Agent Managers installed, this value must be different on
   each Agent Manager.
   The default agent registration password is changeMe and it is case sensitive.
   Click Next to continue.




Figure 4-27 Security Certificate Settings




26.The User input summary panel for Agent Manager (see Figure 4-28) opens. Review the
                 information and click Next.




              Figure 4-28 User input summary




27.The summary information for Agent Manager panel (see Figure 4-29) opens. Click Next.




Figure 4-29 Agent Manager installation summary




28.A panel indicates the status of the Agent Manager installation process. The IBMCDB
   database is created and tables are added to it. Once the Agent Manager installation
   completes, the Summary of Installation and Configuration Results panel (see Figure 4-30)
   opens. Click Next to continue.




              Figure 4-30 Summary of Installation and Configuration Results




29.The next panel (Figure 4-31) informs you when the Agent Manager service started
   successfully. Click Finish.




Figure 4-31 Agent Manager service started




30.The next panel (Figure 4-32) indicates the installation of prerequisite software is complete.
                 Click Finish to exit the prerequisite installer.




              Figure 4-32 Prerequisite software installation complete



4.3 Suite installation
              This section guides you through the step by step process to install the TotalStorage
              Productivity Center components you select.

              The Suite Installer launches the installation wizard for each manager you chose to install.


4.3.1 Best practices
               Before you begin installing the suite of products, complete the following tasks:
               1. If you are running the Fabric Manager installation under Windows 2000, the installation
                  user ID requires the following user rights (see “Granting privileges” under 3.5.1,
                  “User IDs” on page 65):
                  – Act as part of the operating system
                  – Log on as a service
               2. Disable Windows Management Instrumentation (see Figure 3.7 on page 70).
               3. Install SNMP (see 3.10, “Installing SNMP” on page 73).
               4. Create the database for the TotalStorage Productivity Center for Data installation (see
                  3.12.4, “Creating the DB2 database” on page 81).
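Before launching the Suite Installer, you can confirm the state of the relevant Windows services from a command prompt. This is an illustrative check only, assuming the standard service names WinMgmt (Windows Management Instrumentation) and SNMP:

```shell
REM Check whether the WMI service is currently running or stopped.
sc query WinMgmt

REM Confirm that the SNMP service is installed and running.
sc query SNMP
```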


4.3.2 Installing the TotalStorage Productivity Center suite
              Follow these steps for successful installation:

1. Insert the IBM TotalStorage Productivity Center Suite Installer CD into the CD-ROM drive.
   If Windows autorun is enabled, the installation program should start automatically. If it
   does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center
   CD-ROM drive. Double-click setup.exe.

    Note: It may take a few moments for the installer program to initialize. Be patient.
    Eventually, you see the language selection panel (Figure 4-33).

2. The Installer language window (see Figure 4-33) opens. From the list, select a language.
   This is the language used to install this product. Click OK.




Figure 4-33 Installer Wizard

3. You see the Welcome to the InstallShield Wizard for The IBM TotalStorage Productivity
   Center panel (see Figure 4-34). Click Next.




Figure 4-34 Welcome to IBM TotalStorage Productivity Center panel

4. The Software License Agreement panel (Figure 4-35 on page 112) opens. Read the terms
   of the license agreement. If you agree with the terms of the license agreement, select the
   I accept the terms of the license agreement radio button. Then click Next.
   If you do not accept the terms of the license agreement, the installation program ends
   without installing IBM TotalStorage Productivity Center components.




Figure 4-35 License agreement

              5. The next panel enables you to select the type of installation (Figure 4-36). Select Manager
                 installations of Data, Disk, Fabric, and Replication and then click Next.




              Figure 4-36 IBM TotalStorage Productivity Center options panel




6. In the next panel (see Figure 4-37), select the components that you want to install. Click
   Next to continue.




Figure 4-37 IBM TotalStorage Productivity Center components




7. The suite installer installs the IBM Director first (see Figure 4-38). Click Next.




              Figure 4-38 IBM Director prerequisite install

              8. The IBM Director installation is now ready to begin (see Figure 4-39). Click Next.




              Figure 4-39 Begin IBM Director installation




9. The package location for IBM Director panel (see Figure 4-40) opens. Enter the
   appropriate information and click Next.

    Note: Make sure the Windows Management Instrumentation service is disabled (see
    Figure 3.7 on page 70 for detailed instructions). If it is enabled, a window appears
    prompting you to disable the service after you click Next to continue.




Figure 4-40 IBM Director package location

10.The next panel (see Figure 4-41) provides information about the IBM Director post
   installation reboot option. When prompted, choose the option to reboot later. Click Next.




Figure 4-41 IBM Director information


11.The IBM Director Server - InstallShield Wizard panel (Figure 4-42) opens. It indicates that
                 the IBM Director installation wizard will launch. Click Next.




              Figure 4-42 IBM Director InstallShield Wizard

12.The License Agreement window opens (Figure 4-43). Read the license agreement. Select
   the I accept the terms in the license agreement radio button and then click Next.




              Figure 4-43 IBM Director license agreement




13.The next window (Figure 4-44) advertises the new Server Plus Pack enhancement for
   IBM Director. Click Next.




Figure 4-44 IBM Director new Server Plus Pack window

14.The Feature and installation directory window (Figure 4-45) opens. Accept the default
   settings and click Next.




Figure 4-45 IBM Director feature and installation directory window




15.The IBM Director service account information window (see Figure 4-46) opens.
                  a. Type the domain for the IBM Director system administrator. Alternatively, if there is no
                     domain, then type the local host name (the recommended setup).
                  b. Type a user name and password for IBM Director. The IBM Director will run under this
                     user name and you will log on to the IBM Director console using this user name. In our
                     installation we used the user ID we created to install the TotalStorage Productivity
                     Center. This user must be part of the Administrator group.
                  c. Click Next to continue.




              Figure 4-46 Account information

              16.The Encryption settings window (Figure 4-47) opens. Accept the default settings in the
                 Encryption settings window. Click Next.




              Figure 4-47 Encryption settings


17.In the Software Distribution settings window (Figure 4-48), accept the default values and
   click Next.

     Note: The TotalStorage Productivity Center components do not use the
     software-distribution packages function of IBM Director.




Figure 4-48 Installation target directory

18.The Ready to Install the Program window (Figure 4-49) opens. Click Install.




Figure 4-49 Installation ready




19.The Installing IBM Director server window (Figure 4-50) reports the status of the
                 installation.




              Figure 4-50 Installation progress

              20.The Network driver configuration window (Figure 4-51) opens. Accept the default settings
                 and click OK.




              Figure 4-51 Network driver configuration

                  The secondary window closes and the installation wizard performs additional actions
                  which are tracked in the status window.




21.The Select the database to be configured window (Figure 4-52) opens. Select IBM DB2
   Universal Database and click Next.




Figure 4-52 Database selection

22.The IBM Director DB2 Universal Database configuration window (Figure 4-53) opens. It
   may open behind the status window; if so, click it to bring it to the foreground.
   a. In the Database name field, type a new database name for the IBM Director database
      table or type an existing database name.
   b. In the User ID and Password fields, type the DB2 user ID and password that you
      created during the DB2 installation.
   c. Click Next to continue.




Figure 4-53 Database selection configuration




23.In the IBM Director DB2 Universal Database configuration secondary window
                 (Figure 4-54), accept the default DB2 node name LOCAL - DB2. Click OK.




              Figure 4-54 Database node name selection

              24.The Database configuration in progress window is displayed at the bottom of the IBM
                 Director DB2 Universal Database configuration window. Wait for the configuration to
                 complete and the secondary window to close.
              25.When the InstallShield Wizard Completed window (Figure 4-55) opens, click Finish.




              Figure 4-55 Completed installation


                   Important: Do not reboot the machine at the end of the IBM Director installation. The
                   Suite Installer reboots the machine.




26.When you see the IBM Director Server Installer Information window (Figure 4-56), click No.




Figure 4-56 IBM Director reboot option


    Important: Are you installing IBM TotalStorage Productivity Center for Data? If so,
    have you created the database for IBM TotalStorage Productivity Center for Data, or are
    you using an existing database?

    If you are installing the Disk Manager, you must have created the administrative
    superuser ID and group and set the privileges.

27.The Install Status panel (see Figure 4-57) opens after a successful installation. Click Next.




Figure 4-57 IBM Director Install Status successful




28.In the machine reboot window (see Figure 4-58), click Next to reboot the machine.

                   Important: If the server does not reboot at this point, cancel the installer and reboot the
                   server.




              Figure 4-58 Install wizard completion




4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base
            There are three separate installations to perform:
               – Install the IBM TotalStorage Productivity Center for Disk and Replication Base code.
               – Install the IBM TotalStorage Productivity Center for Disk.
               – Install the IBM TotalStorage Productivity Center for Replication.

           IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a
           user who is logged on as a local administrator (for example, as the administrator user) on the
           system where the IBM TotalStorage Productivity Center for Disk and Replication Base will be
           installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication
           Base as a server, you need the following required system privileges, called user rights, to
           successfully complete the installation as described in 3.5.1, “User IDs” on page 65.
               – Act as part of the operating system
               – Create a token object
               – Increase quotas
               – Replace a process level token
               – Debug programs
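If the installing user lacks these rights, they can be granted from the command line rather than through the Local Security Policy console. This sketch assumes ntrights.exe from the Windows Resource Kit is available; tpcadmin is a hypothetical installation user ID, and each right is mapped to its internal privilege name:

```shell
REM Sketch only (requires ntrights.exe from the Windows Resource Kit).
REM Replace tpcadmin (a hypothetical user ID) with your installation user.

REM Act as part of the operating system
ntrights +r SeTcbPrivilege -u tpcadmin
REM Create a token object
ntrights +r SeCreateTokenPrivilege -u tpcadmin
REM Increase quotas
ntrights +r SeIncreaseQuotaPrivilege -u tpcadmin
REM Replace a process level token
ntrights +r SeAssignPrimaryTokenPrivilege -u tpcadmin
REM Debug programs
ntrights +r SeDebugPrivilege -u tpcadmin
```

A log off and log on is required before newly granted rights take effect.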

            After the machine reboots, the installer initializes and continues the suite installation. A
            window opens prompting you to select the installation language to be used for this wizard
            (Figure 4-59). Select the language and click OK.




           Figure 4-59 Selecting the language for the IBM TotalStorage Productivity Center installation wizard




1. The next panel enables you to select the type of installation (Figure 4-60). Select Manager
                 installations of Data, Disk, Fabric, and Replication and click Next.




              Figure 4-60 IBM TotalStorage Productivity Center options panel

              2. The next window (Figure 4-61) opens allowing you to select which components to install.
                 Select the components you wish to install (all components in this case) and click Next.




              Figure 4-61 TotalStorage Productivity Center components




3. The installer checks that all prerequisite software is installed on your system (see
   Figure 4-62). Click Next.




Figure 4-62 Prerequisite software check

4. Figure 4-63 shows the Installer window about to begin installation of Productivity Center
   for Disk and Replication Base. The window also displays the products that are yet to be
   installed. Click Next to begin the installation.




Figure 4-63 IBM TotalStorage Productivity Center installation information




5. The Package Location for Disk and Replication Manager window (Figure 4-64) opens.
                 Enter the appropriate information and click Next.




              Figure 4-64 Package location for Productivity Center Disk and Replication

              6. The Information for Disk and Replication Base Manager panel (see Figure 4-65) opens.
                 Click Next.




              Figure 4-65 Installer information




7. The Welcome panel (see Figure 4-66) opens. It indicates that the Disk and Replication
   Base Manager installation wizard will be launched. Click Next.




Figure 4-66 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information




8. In the Destination Directory panel (Figure 4-67), you confirm the target directories. Enter
                 the directory path or accept the default directory and click Next.




              Figure 4-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory

              9. In the IBM WebSphere Instance Selection panel (see Figure 4-68), click Next.




              Figure 4-68 WebSphere Application Server information



10.If the installation user ID privileges were not set, you see an information panel stating that
   you need to set the privileges (see Figure 4-69). Click Yes.




Figure 4-69 Verifying the effective privileges

11.The required user privileges are set and an informational window opens (see Figure 4-70).
   Click OK.




Figure 4-70 Message indicating the enablement of the required privileges

12.At this point, the installation terminates. You must close the installer. Log off of Windows,
   log back on again, and then restart the installer.




13.In the Installation Type panel (Figure 4-71), select Typical and click Next.




              Figure 4-71 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation

              14.If the IBM Director Support Program and IBM Director Server service is still running, the
                 Servers Check panel (see Figure 4-72) opens and prompts you to stop the services. Click
                 Next to stop the services.




              Figure 4-72 Server checks


15.In the User Name Input 1 of 2 panel (Figure 4-73), enter the name and password for the
   IBM TotalStorage Productivity Center for Disk and Replication Base super user ID. This
   user name must be defined to the operating system. In our environment we used
   tpccimom as our super user. After entering the required information click Next to continue.




Figure 4-73 IBM TotalStorage Productivity Center for Disk and Replication Base superuser information

16.If the specified super user ID is not defined to the operating system, a window appears
   asking if you would like to create it (see Figure 4-74). Click Yes to continue.




Figure 4-74 Create new local user account




17.In the User Name Input 2 of 2 panel (Figure 4-75), enter the user name and password for
                 the IBM DB2 Universal Database Server. This is the user ID that was specified when DB2
                 was installed (see Figure 4-8 on page 90). Click Next to continue.




              Figure 4-75 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information




18.The SSL Configuration panel (Figure 4-76) opens. If you selected IBM TotalStorage
   Productivity Center for Disk and Replication Base Server, then you must enter the fully
   qualified name of the two server key files that were generated previously or that must be
   generated during or after the IBM TotalStorage Productivity Center for Disk and
   Replication Base installation. The information that you enter will be used later.
   a. Choose either of the following options:
       •   Generate a self-signed certificate: Select this option if you want the installer to
           automatically generate these certificate files. We generate the certificates in our
           installation.
       •   Defer the generation of the certificate as a manual post-installation task:
           Select this option if you want to manually generate these certificate files after the
           installation, using WebSphere Application Server ikeyman utility.
   b. Enter the Key file and Trust file passwords. The passwords must be a minimum of six
      characters in length and cannot contain spaces. You should record the passwords in
      the worksheets provided in Appendix A, “Worksheets” on page 991.
   c. Click Next.




Figure 4-76 Key and Trust file options
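If you defer certificate generation, the key file can later be created manually with the command-line form of the ikeyman utility. This is a sketch under the assumption that the ikeycmd command is on the path and that a JKS key store is wanted; the file name, password, label, and distinguished name are all illustrative:

```shell
REM Sketch: create a key store and a self-signed certificate with
REM ikeycmd (the command-line ikeyman). Names, password, and DN are
REM examples only; substitute your own values.
ikeycmd -keydb -create -db serverKeys.jks -pw keyfilepw -type jks
ikeycmd -cert -create -db serverKeys.jks -pw keyfilepw -label tpcServerCert -dn "CN=tpcserver.example.com,O=IBM,C=US"
```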




The Generate Self-Signed Certificate window opens (see Figure 4-77). Complete all the
                  required fields and click Next to continue.




              Figure 4-77 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information




19.Next you see the Create Local Database window (Figure 4-78). Accept the default
   database name of DMCOSERV, or enter a different database name. Click Next to
   continue.

     Note: The database name must be unique to IBM TotalStorage Productivity Center for
     Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center
     for Disk and Replication Base database with any other applications.




Figure 4-78 IBM TotalStorage Productivity Center for Disk and Replication Base database name




20.The Preview window (Figure 4-79) displays a summary of all of the choices that you made
                 during the customizing phase of the installation. Click Install to complete the installation.




              Figure 4-79 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information




21.The DB2 database is created, the keys are generated, and the Productivity Center for Disk
   and Replication Base is installed. The Finish window opens. You can view the log file for
   any error messages. The log file is located in <installed directory>\logs\dmlog.txt and
   contains a trace of the installation actions. Click Finish to complete the installation.




Figure 4-80 Productivity Center for Disk and Replication Base Installer - Finish

Notepad opens and displays the post-installation tasks information. Read the information and
complete any required tasks.
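To scan the installation log for problems without opening it in an editor, a simple text search of dmlog.txt works. The path below is an assumption based on a typical install directory; substitute your own:

```shell
REM Sketch: list lines mentioning errors or failures in the Disk and
REM Replication Base installation log (adjust the path to your
REM install directory).
findstr /i /n "error fail" "C:\Program Files\IBM\TPC\logs\dmlog.txt"
```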
22.The Install Status window (Figure 4-81) opens after the successful Productivity Center for
   Disk and Replication Base installation. Click Next.




Figure 4-81 Install Status for Productivity Center for Disk and Replication Base successful




4.3.4 IBM TotalStorage Productivity Center for Disk
              The next product to install is the Productivity Center for Disk as indicated in Figure 4-82. Click
              Next to begin the installation.




              Figure 4-82 IBM TotalStorage Productivity Center installer information

              1. A window (Figure 4-83) opens that prompts you for the package location for CD-ROM
                 labeled IBM TotalStorage Productivity Center for Disk. Enter the appropriate information
                 and click Next.




              Figure 4-83 Productivity Center for Disk installation package location




2. The next window that opens indicates that the IBM TotalStorage Productivity Center for
   Disk installer wizard will be launched (see Figure 4-84). Click Next.




Figure 4-84 IBM TotalStorage Productivity Center for Disk installer

3. The Productivity Center for Disk Installer - Welcome panel (see Figure 4-85) opens. Click
   Next.




Figure 4-85 IBM TotalStorage Productivity Center for Disk Installer Welcome



4. The Destination Directory panel (Figure 4-86) opens. Enter the directory path or accept
                 the default directory and click Next.




              Figure 4-86 Productivity Center for Disk Installer - Destination Directory

              5. The Installation Type panel (Figure 4-87) opens. Select Typical and click Next.




              Figure 4-87 Productivity Center for Disk - Installation Type




6. The Create Local Database panel (Figure 4-88) opens. Accept the default database name
   of PMDATA or re-enter a new database name. Then click Next.




Figure 4-88 IBM TotalStorage Productivity Center for Disk - Create Local Database




7. Review the information on the IBM TotalStorage Productivity Center for Disk – Preview
                 panel (Figure 4-89) and click Install.




              Figure 4-89 IBM TotalStorage Productivity Center for Disk Installer - Preview

              8. The installer creates the required database (see Figure 4-90) and installs the product. You
                 see a progress bar for the Productivity Center for Disk installation status.




              Figure 4-90 Productivity Center for Disk DB2 database creation




9. When the installation is complete, you see the Finish panel (Figure 4-91). Review the post
   installation tasks. Click Finish.




Figure 4-91 Productivity Center for Disk Installer - Finish

10.The Install Status window (Figure 4-92) opens after the successful Productivity Center for
   Disk installation. Click Next.




Figure 4-92 Install Status for Productivity Center for Disk successful




4.3.5 IBM TotalStorage Productivity Center for Replication
              A panel opens that indicates that the installation for IBM TotalStorage Productivity Center for
              Replication is about to begin (see Figure 4-93). Click Next to begin the installation.




              Figure 4-93 IBM TotalStorage Productivity Center installation overview

              1. The Package Location for Replication Manager panel (Figure 4-94) opens. Enter the
                 appropriate information and click Next.




              Figure 4-94 Productivity Center for Replication install package location




2. The next window that opens indicates that the IBM TotalStorage Productivity Center for
   Replication installer wizard will be launched (see Figure 4-95). Click Next.




Figure 4-95 Productivity Center for Replication installer

3. The Welcome window (Figure 4-96) opens. It suggests documentation that you can review
   prior to the installation. Click Next to continue or click Cancel to exit the installation.




Figure 4-96 IBM TotalStorage Productivity Center for Replication Installer – Welcome




4. The Destination Directory panel (Figure 4-97) opens. Enter the directory path or accept
                 the default directory. Click Next to continue.




              Figure 4-97 IBM TotalStorage Productivity Center for Replication Installer – Destination Directory




5. The next panel (see Figure 4-98) asks you to select the installation type. Select the
   Typical radio button and click Next.




Figure 4-98 Productivity Center for Replication Installer – Installation Type




6. In the Create Local Database for ‘Hardware’ Subcomponent window (see Figure 4-99), in
                 the Database name field, enter a value for the new Hardware subcomponent database or
                 accept the default. We recommend that you accept the default. Click Next.

                   Note: The database name must be unique to the Replication Manager subcomponent.
                   You cannot share the Replication Manager subcomponent database with any other
                   applications or with other Replication Manager subcomponents.




              Figure 4-99 IBM TotalStorage Productivity Center for Replication: Hardware subcomponent




7. In the Create Local Database for ‘ElementCatalog’ Subcomponent window (see
   Figure 4-100), in the Database name field, enter a name for the new Element Catalog
   subcomponent database or accept the default. Click Next.

     Note: The database name must be unique to the Replication Manager subcomponent.
     You cannot share the Replication Manager subcomponent database with any other
     applications or with other Replication Manager subcomponents.




Figure 4-100 IBM TotalStorage Productivity Center for Replication: Element Catalog subcomponent




8. In the Create Local Database for ‘ReplicationManager’ Subcomponent window (see
                  Figure 4-101), in the Database name field, enter a name for the new Replication Manager
                  subcomponent database or accept the default. Click Next.

                    Note: The database name must be unique to the Replication Manager subcomponent.
                    You cannot share the Replication Manager subcomponent database with any other
                    applications or with other Replication Manager subcomponents.




              Figure 4-101 TotalStorage Productivity Center for Replication: Replication Manager subcomponent




9. In the Create Local Database window for the SVC Hardware subcomponent (see
   Figure 4-102), in the Database name field, enter a name for the new SVC hardware
   subcomponent database or accept the default. Click Next.

    Note: The database name must be unique to the Replication Manager subcomponent.
    You cannot share the Replication Manager subcomponent database with any other
    applications or with other Replication Manager subcomponents.




Figure 4-102 IBM TotalStorage Productivity Center for Replication: SVC Hardware subcomponent




10.The Setting Tuning Cycle Parameter window (Figure 4-103) opens. Accept the default
                 value of tuning every 24 hours or change the value. You can change this value later in the
                 ElementCatalog.properties file. Click Next.




              Figure 4-103 IBM TotalStorage Productivity Center for Replication: Database tuning cycle




11.Review the information in the TotalStorage Productivity Center for Replication Installer –
   Preview panel (Figure 4-104). Click Install.




Figure 4-104 IBM TotalStorage Productivity Center for Replication Installer – Preview




12.You see the Productivity Center for Replication Installer - Finish panel (see Figure 4-105)
                 upon successful installation. Read the post installation tasks. Click Finish to complete the
                 installation.




              Figure 4-105 Productivity Center for Replication Installer – Finish

               13.The Install Status window (Figure 4-106) opens after the successful Productivity Center
                  for Replication installation. Click Next.




              Figure 4-106 Install Status for Productivity Center for Replication successful




4.3.6 IBM TotalStorage Productivity Center for Fabric
           Prior to installing IBM TotalStorage Productivity Center for Fabric, you must complete several
           prerequisite tasks. These tasks are described in detail in 3.11, “IBM TotalStorage Productivity
           Center for Fabric” on page 75. Specifically, complete the tasks in the following sections:
              3.10, “Installing SNMP” on page 73
              3.11.1, “The computer name” on page 75
               3.11.2 on page 75
              3.11.3, “Windows Terminal Services” on page 75
              “User IDs and password considerations” on page 76
              3.11.4, “Tivoli NetView” on page 76
              3.11.5, “Personal firewall” on page 77
              “Security considerations” on page 77

           Installing the manager
           After successful installation of the Productivity Center for Replication, the Suite Installer
           begins the installation of Productivity Center for Fabric (see Figure 4-107). Click Next.




           Figure 4-107 IBM TotalStorage Productivity Center installation information




1. The panel that opens prompts you to specify the location of the install package for
                 Productivity Center for Fabric Manager (see Figure 4-108). Enter the appropriate path and
                 click Next.

                    Important: If you used the demonstration certificates, point to the CD-ROM drive. If
                    you generated new certificates, point to the manager CD image with the new
                    agentTrust.jks file.




              Figure 4-108 Productivity Center for Fabric install package location




2. The next window that opens indicates that the IBM TotalStorage Productivity Center for
   Fabric installer wizard will be launched (see Figure 4-109). Click Next.




Figure 4-109 Productivity Center for Fabric installer

3. A window opens in which you select the language to use for the wizard (see Figure 4-110).
   Select the required language and click OK.




Figure 4-110 IBM TotalStorage Productivity Center for Fabric installer: Selecting the language




4. A panel opens asking you to select the type of installation you wish to perform
                 (Figure 4-111).
                  In this case, we install the IBM TotalStorage Productivity Center for Fabric code. You can
                  also use the Suite Installer to perform remote deployment of the Fabric Agent. You can
                  perform this operation only if you installed the common agent previously on machines. For
                  example, you may have installed the Data Agent on the machines and want to add the
                  Fabric Agent to the same machines. You must have the Fabric Manager installed before
                  you can deploy the Fabric Agent. You cannot select both Fabric Manager Installation and
                  Remote Fabric Agent Deployment at the same time. You can only select one option.
                  Click Next.




              Figure 4-111 Fabric Manager installation




5. The Welcome panel (Figure 4-112) opens. Click Next.




Figure 4-112 IBM TotalStorage Productivity Center for Fabric: Welcome information

6. The next window that opens prompts you to confirm the target directory (see
   Figure 4-113). Enter the directory path or accept the default directory. Click Next.




Figure 4-113 IBM TotalStorage Productivity Center for Fabric installation directory




7. In the next panel (see Figure 4-114), you specify the port number. This is a range of 25
                 port numbers for use by IBM TotalStorage Productivity Center for Fabric. The first port
                 number that you specify is considered the primary port number. You only need to enter
                 the primary port number. The primary port number and the next 24 numbers are reserved
                 for use by IBM TotalStorage Productivity Center for Fabric. For example, if you specify port
                 number 9550, IBM TotalStorage Productivity Center for Fabric uses port numbers 9550
                 through 9574.
                  Ensure that the port numbers you use are not used by other applications at the same time.
                  To determine which port numbers are in use on a particular computer, type either of the
                  following commands from a command prompt.
                  netstat -a
                  netstat -an
                  We recommend that you use the first of these two commands.
                  The port numbers in use on the system are listed in the Local Address column of the
                  output. This field has the format host:port. Enter the primary port number and click Next.
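The port check described above can also be scripted. The following sketch (the helper name ports_in_use is our own, not part of the product) attempts to bind each port in the 25-port range and reports any that are already taken; a port that binds successfully here is free at that instant, although another application could still claim it later:

```python
import socket

def ports_in_use(primary_port, count=25, host="127.0.0.1"):
    """Return the ports in [primary_port, primary_port + count) that fail to bind."""
    busy = []
    for port in range(primary_port, primary_port + count):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((host, port))
            except OSError:
                busy.append(port)
    return busy

# Example: an empty list means ports 9550-9574 are currently free.
# print(ports_in_use(9550))
```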




              Figure 4-114 IBM TotalStorage Productivity Center for Fabric port number




8. As shown in Figure 4-115, select the database repository, either DB2 or Cloudscape. If
   you select DB2, you must have previously installed DB2 on the server. DB2 is the
   recommended installation option. Click Next.




Figure 4-115 IBM TotalStorage Productivity Center for Fabric database selection type

9. In the next panel (see Figure 4-116), select the WebSphere Application Server to use
   in the installation. WebSphere Application Server was installed as part of the
   prerequisite software, so we chose the Non Embedded (Full) WebSphere Application
   Server option. If the Fabric Manager is to be installed stand-alone on a server, choose
   the Embedded WebSphere Application Server - Express option. Click Next.




Figure 4-116 Productivity Center for Fabric WebSphere Application Server type selection




10.The Single/Multiple User ID/Password Choice panel (see Figure 4-117) opens if you
                  selected DB2 as your database. This panel allows you to use the DB2 administrative
                  user ID and password for DB2, WebSphere, Host Authentication, and NetView.
                   If you select all the boxes, you are prompted only for the DB2 user ID and password,
                   which is then used for all instances. In our install, we selected only DB2 and NetView;
                   a different user ID and password will be used for WebSphere and Host Authentication.

                   Note: If you selected IBM Cloudscape as your database, this panel is not displayed.

                  Click Next.




              Figure 4-117 IBM TotalStorage Productivity Center for Fabric user and password options




11.The DB2 Administrator user ID and password panel (Figure 4-118) opens if you
   selected DB2 as your database. This panel allows you to specify the DB2
   administrative user ID and password. The user ID and password specified during the
   DB2 installation in Figure 4-8 on page 90 were used in this example. Enter the required
   user ID and password and click Next. The installer verifies that the user ID entered
   exists.




Figure 4-118 IBM TotalStorage Productivity Center for Fabric database user information




12.In the next window (see Figure 4-119) that opens, type the name of the new database in
                 the Type database name: field or accept the default. In our install we accepted the default
                 database name. Click Next.

                   Note: The database name must be unique. You cannot share the IBM TotalStorage
                   Productivity Center for Fabric database with any other applications.




              Figure 4-119 IBM TotalStorage Productivity Center for Fabric database name

              13.Since we did not check the box for WebSphere in Figure 4-117 on page 164, the panel in
                 Figure 4-120 on page 167 opens prompting for a WebSphere user ID and password. We
                 used the tpcadmin user ID, which is what we used for the IBM Director service account
                 (refer to Figure 4-46 on page 118). Enter the required information and click Next.




Figure 4-120 WebSphere Application Server user ID and password

14.Since we also did not check the box for Host Authentication (Figure 4-117 on page 164),
   the following panel (Figure 4-121) opens. Enter the password for Host Authentication. This
   password is used by the Fabric agents. Click Next.




Figure 4-121 Host Authentication password

15.In the window (Figure 4-122 on page 168) that opens, enter the Tivoli NetView drive
   name. Click Next.




Figure 4-122 IBM TotalStorage Productivity Center for Fabric database drive information

              16.The Agent Manager Information panel (Figure 4-123 on page 169) opens. You must
                 complete the following fields:
                  – Agent manager name or IP address: This is the host name or IP address of your
                    Agent Manager.
                  – Agent manager registration port: This is the port number of your Agent Manager.
                    The default value is 9511.
                  – Agent Manager public port: This is a public port. The default value is 9513.
                  – Agent registration password (twice): This is the password used to register the
                    common agent with the Agent Manager as shown in Figure 4-27 on page 105. If the
                    password is not set and the default is accepted, the password is changeMe. This
                    password is case sensitive.
                      The agent registration password resides in the AgentManager.properties file where the
                      Agent Manager is installed. It is located in the following directory:
                          %WSAS_INSTALL_ROOT%\InstalledApps\<cell>\AgentManager.ear\AgentManager.war\WEB-INF\classes\resource
                  – Resource manager registration user ID: This is the user ID used to register the
                    resource manager with the Agent Manager. The default is manager.
                     The Resource Manager registration user ID and password reside in the
                     Authorization.xml file where the Agent Manager is installed. It is located in the following
                     directory:
                          <Agent_Manager_install_dir>\config
                  – Resource manager registration password (twice): This is the password used to
                    register the resource manager with the Agent Manager. The default is password.
                  Fill in the information and click Next.
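Because AgentManager.properties is a standard Java properties file, its values can be inspected with a short script before you fill in this panel. This sketch is illustrative only: the read_properties helper is our own, it handles only the common key=value form, and the property key shown in the usage comment is an assumption, so check the file for the actual key name:

```python
def read_properties(path):
    """Parse a simple Java-style .properties file (key=value lines) into a dict.
    Comment lines starting with '#' or '!' and blank lines are ignored."""
    props = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", "!")):
                continue
            if "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

# Illustrative only -- the key name below is an assumption; inspect the
# AgentManager.properties file shipped with your Agent Manager for the real key:
# props = read_properties(r"C:\AgentManager\AgentManager.properties")
# print(props.get("registration.agent.password"))
```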




Figure 4-123 IBM TotalStorage Productivity Center for Fabric Agent Manager information

17.The next panel (Figure 4-124) that opens provides information about the location and size
   of IBM TotalStorage Productivity Center for Fabric - Manager. Click Next.




Figure 4-124 IBM TotalStorage Productivity Center for Fabric installation information

18.You see the Status panel. The installation can take about 15 to 20 minutes to complete.
19.When the installation has completed, you see a panel indicating that the wizard
   successfully installed the Fabric Manager (see Figure 4-125 on page 170). Click Next.




Figure 4-125 IBM TotalStorage Productivity Center for Fabric installation status

               20.In the next panel (see Figure 4-126), you are prompted to restart your computer. Select
                  No, I will restart my computer later because you do not need to restart your computer
                  now. Click Finish to complete the installation.




              Figure 4-126 IBM TotalStorage Productivity Center for Fabric restart options

              21.The Install Status panel (see Figure 4-127 on page 171) opens. It indicates that the
                 Productivity Center for Fabric installation was successful. Click Next.




Figure 4-127 IBM TotalStorage Productivity Center installation information


4.3.7 IBM TotalStorage Productivity Center for Data
           Prior to installing IBM TotalStorage Productivity Center for Data, you need to complete
           several prerequisite tasks. These tasks are described in detail in 3.12, “IBM TotalStorage
           Productivity Center for Data” on page 78. Specifically you must complete the tasks in the
           following sections:
              3.12.1, “Server recommendations” on page 78
              3.12.2, “Supported subsystems and databases” on page 78
              3.12.3, “Security considerations” on page 79
              3.12.4, “Creating the DB2 database” on page 81
              The IBM TotalStorage Productivity Center for Data database needs to be created before
              you begin the installation.

           This section provides an overview of the steps you need to perform when installing IBM
           TotalStorage Productivity Center for Data.

            Important: Make sure that the Tivoli Agent Manager service is started before you begin
            the installation.

           You see the panel indicating that the installation of Productivity Center for Data - Manager is
           about to begin (see Figure 4-128 on page 172). Click Next to begin the installation.




Figure 4-128 IBM TotalStorage Productivity Center for Data installation information

              1. In the window that opens, you are prompted to enter the install package location for IBM
                 TotalStorage Productivity Center for Data (see Figure 4-129). Enter the appropriate
                 information and click Next.




              Figure 4-129 Productivity Center for Data install package location

              2. The next window that opens indicates that the IBM TotalStorage Productivity Center for
                 Data installer wizard will be launched (see Figure 4-130 on page 173). Click Next.




Figure 4-130 Productivity Center for Data installer

3. In the next panel (see Figure 4-131), select Install Productivity Center for Data and click
   Next.




Figure 4-131 IBM TotalStorage Productivity Center for Data install window




4. Read the License Agreement shown in Figure 4-132. Indicate your acceptance of the
                 agreement by selecting the I have read and AGREE to abide by the license agreement
                 above check box. Then click Next.




              Figure 4-132 IBM TotalStorage Productivity Center for Data license agreement

              5. The next panel asks you to confirm that you read the license agreement (see
                 Figure 4-133). Click Yes to indicate that you have read and accepted the license
                 agreement.




              Figure 4-133 Confirmation the Productivity Center for Data license agreement has been read




6. The next window shown in Figure 4-134 allows you to choose the type of installation that
   you are performing. Select The Productivity Center for Data Server and an Agent on
   this machine.
   This installs the server, agent, and user interface components on the machine where the
   installation program is running. You must install the server on at least one machine within
   your environment.
   Click Next.




Figure 4-134 IBM TotalStorage Productivity Center for Data selection options




7. Review and enter the license key for the appropriate functions if required. See
                 Figure 4-135. Click Next.




              Figure 4-135 IBM TotalStorage Productivity Center for Data license key information




8. The installation program validates the license key and you are asked to select the
   relational database management system (RDBMS) that you want to host the Data
   Manager repository. See Figure 4-136.
   The repository is a set of relational database tables where Data Manager builds a
   database of statistics to keep track of your environment. For our installation, we select IBM
   DB2 UDB. Click Next.




Figure 4-136 IBM TotalStorage Productivity Center for Data database selection

9. The Create Service Account panel opens to create the TSRMsrv1 local account. Click
   Yes.




Figure 4-137 Create Service Account




10.In the next window (see Figure 4-138), complete these tasks:
                  a. Select the database that was created as a prerequisite. Refer to 3.12.4, “Creating the
                     DB2 database” on page 81.
                  b. Fill in the required user ID and password. This is the DB2 user ID and password
                     defined previously.
                  c. Click Next.




              Figure 4-138 IBM TotalStorage Productivity Center for Data database selection option

              11.The Repository Creation Parameters panel (see Figure 4-139 on page 179) for UDB
                 opens. On this panel you can specify the database schema and tablespace name.
                  If you are using DB2 as the repository, you can also choose how you will manage the
                  database space:
                  – System Managed (SMS): This option indicates that the space is managed by the OS.
                    In this case you specify the Container Directory, which is then managed by the system,
                    and can grow as large as the free space on the file system.

                        Tip: If you do not have in-house database skills, the System Managed approach is
                        recommended.

                  – Database Managed (DMS): This option means that the space is managed by the
                    database. In this case you need to specify the Container Directory, Container File, and
                    Size fields. The Container File specifies a filename for the repository, and Size is the
                    predefined space for that file. You can later change this by using the ALTER
                    TABLESPACE command. We accepted the defaults.




Tip: We recommend that you use meaningful names for Container Directory and
        Container File at installation. This can help you in case you need to find the
        Container File.

   Enter the necessary information and click Next.




Figure 4-139 IBM TotalStorage Productivity Center for Data repository information




12.The Productivity Center for Data Parameters panel (Figure 4-140) opens. Use the Agent
                 Manager Parameters window (Figure 4-141 on page 182) to provide information about the
                 Agent Manager installed in your environment. Click Next.




              Figure 4-140 IBM TotalStorage Productivity Center for Data installation parameters




13.The Agent Manager Parameters panel (Figure 4-141 on page 182) provides information
   about the Agent Manager installed in your environment. Table 4-1 provides a description
   of the fields in the panel.

Table 4-1 Agent Manager Parameters descriptions
 Field                           Description

 Hostname                        Enter the fully qualified network name or IP address of the Agent
                                 Manager server as seen by the agents.

 Registration Port               Enter the port number of the Agent Manager. The default is 9511.

 Public Port                     Enter the public port for Agent Manager. The default is 9513.

 Resource Manager Username       Enter the resource manager user ID. This is the user ID used to
                                  register the resource manager with the Agent Manager. The default
                                  is manager.

 Resource Manager Password       Enter the password used to register the resource manager with
                                  the Agent Manager.

 Agent Registration password     This is the password that was set during the Tivoli Agent Manager
                                  installation (Figure 4-27 on page 105). The default password is
                                  changeMe. The password is stored in the AgentManager.properties
                                  file, in the %install dir%\AgentManager\image directory.

   Click Next.

     Note: If an error is displayed during this part of the installation, verify that the
     agentTrust.jks file was copied across, and verify the Agent Registration password.
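The first cause mentioned in the note can be pre-checked with a small script. This sketch (the preflight helper is our own, and the path is a placeholder) only confirms that the certificate file is present and non-empty; it does not validate the keystore contents:

```python
from pathlib import Path

def preflight(cert_path):
    """Report whether the certificate file the agents need is present and non-empty."""
    p = Path(cert_path)
    if not p.is_file():
        return f"missing: {p}"
    if p.stat().st_size == 0:
        return f"empty: {p}"
    return f"ok: {p} ({p.stat().st_size} bytes)"

# Example (substitute the directory you copied agentTrust.jks into):
# print(preflight(r"C:\certs\agentTrust.jks"))
```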




Figure 4-141 IBM TotalStorage Productivity Center for Data Agent Manager install information




14.Use the NAS Discovery Parameters panel in Figure 4-142 to configure Data Manager for
   use with any network-attached storage (NAS) devices in your environment. You can
   leave the fields blank if you do not have any NAS devices. Click Next.




Figure 4-142 IBM TotalStorage Productivity Center for Data NAS options




15.The Space Requirements panel for the Productivity Center for Data Server (Figure 4-143)
                 opens. Enter the directory path or accept the default directory.
                  If the current disk or device does not have enough space for the installation, then you can
                  enter a different location for the installation in the Choose the installation directory field. Or
                  you can click Browse to browse your system for an available and appropriate space. The
                   default installation directory is C:\Program Files\IBM\TPC\Data.
                  Click Next.




              Figure 4-143 IBM TotalStorage Productivity Center for Data installation destination options

              16.Confirm the path for installing the Productivity Center for Data Server as shown in
                 Figure 4-144. At this point, the installation process has gathered all of the information that
                 is needed to perform the installation. Click OK.




              Figure 4-144 IBM TotalStorage Productivity Center for Data Server destination path confirmation




17.Review and change the Productivity Center for Data Agent Parameters (see Figure 4-145)
   as required. We recommend that you accept the defaults. Click Next.




Figure 4-145 IBM TotalStorage Productivity Center for Data agent parameters




18.The Windows Service Account panel shown in Figure 4-146 opens. Choose Create a local
                 account for the agent to run under and click Next.




              Figure 4-146 Windows Service Account




19.The Space Requirements panel (see Figure 4-147) opens for the Productivity Center for
   Data Agent. Enter the directory path or accept the default directory.
   If the current disk or device does not have enough space for the installation, you can
   enter a different location in the Choose the Common Agent installation directory field,
   or click Browse to browse your system for an available and appropriate location. The
   default installation directory is C:\Program Files\Tivoli\ep.
   Click Next.




Figure 4-147 IBM TotalStorage Productivity Center for Data common agent installation information

20.When you see a message similar to the one in Figure 4-148, confirm the path where
   Productivity Center for Data Agent is to be installed. At this point, the installation process
   has gathered all of the information necessary to perform the installation. Click OK.




Figure 4-148 IBM TotalStorage Productivity Center for Data Agent destination path confirmation




21.When you see a window similar to the example in Figure 4-149, review the choices that
                 you have made. Then click Next.




              Figure 4-149 IBM TotalStorage Productivity Center for Data preview options




22.A window opens that tracks the progress of the installation (see Figure 4-150).




Figure 4-150 IBM TotalStorage Productivity Center for Data installation information




23.When the installation is done, the progress window shows a message indicating that the
                 installation completed successfully (see Figure 4-151). Review this panel and click Done.




              Figure 4-151 IBM TotalStorage Productivity Center for Data success information

              24.The Install Status panel opens showing the message The Productivity Center for Data
                 installation was successful. Click Next to complete the installation.




              Figure 4-152 Install Status for Productivity Center for Data successful






    Chapter 5.   CIMOM install and configuration
                 This chapter provides a step-by-step guide to configure the Common Information Model
                 Object Manager (CIMOM), LSI Provider, and Service Location Protocol (SLP) that are
                 required to use the IBM TotalStorage Productivity Center.




© Copyright IBM Corp. 2005. All rights reserved.                                                        191
5.1 Introduction
              After you have completed the installation of TotalStorage Productivity Center for Disk,
              TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Fabric,
              or TotalStorage Productivity Center for Data, you will need to install and configure the
              Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP)
              agents.

                Note: For the remainder of this chapter, we refer to the TotalStorage Productivity Center
                for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center
                for Fabric, and TotalStorage Productivity Center for Data simply as the TotalStorage
                Productivity Center.

               The TotalStorage Productivity Center uses SLP as the method for CIM clients to locate
               managed objects. The CIM agents may be built into the devices or external to them. When a
               CIM agent implementation is available for a supported device, the device can be accessed
               and configured by management applications using industry-standard XML-over-HTTP
               transactions.
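               These XML-over-HTTP transactions follow the CIM-XML encoding. As a purely illustrative
               sketch (the namespace and class name below are generic examples, not taken from any
               particular device), an intrinsic method call that enumerates instance names might be
               POSTed to the CIMOM like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sent as the body of an HTTP POST with a CIMOperation: MethodCall header -->
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
  <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
    <SIMPLEREQ>
      <IMETHODCALL NAME="EnumerateInstanceNames">
        <LOCALNAMESPACEPATH>
          <NAMESPACE NAME="root"/>
          <NAMESPACE NAME="cimv2"/>
        </LOCALNAMESPACEPATH>
        <IPARAMVALUE NAME="ClassName">
          <CLASSNAME NAME="CIM_StorageVolume"/>
        </IPARAMVALUE>
      </IMETHODCALL>
    </SIMPLEREQ>
  </MESSAGE>
</CIM>
```

               The CIMOM replies with a corresponding SIMPLERSP message listing the instance names.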

              In this chapter we describe the steps for:
                  Planning considerations for Service Location Protocol (SLP)
                  SLP configuration recommendation
                  General performance guidelines
                  Planning considerations for CIMOM
                  Installing and configuring CIM agent for Enterprise Storage Server and DS6000/DS8000
                  Verifying connection to ESS
                  Verify connection to DS6000/DS8000
                  Setting up Service Location Protocol Directory Agent (SLP DA)
                  Installing and configuring CIM agent for DS 4000 Family
                  Configuring CIM agent for SAN Volume Controller



5.2 Planning considerations for Service Location Protocol
               The Service Location Protocol (SLP) has three major components: the Service Agent (SA),
               the User Agent (UA), and the Directory Agent (DA). The SA and UA are required
               components; the DA is optional.

               Whether to use an SLP DA in your environment depends on the considerations described
               below.


5.2.1 Considerations for using SLP DAs
              Consider using a DA to reduce the amount of multicast traffic involved in service discovery.
              In a large network with many UAs and SAs, this traffic can become so large that network
              performance degrades.




When one or more DAs are deployed, UAs unicast their service requests to the DAs, and
           SAs register with the DAs using unicast. The only SLP multicast remaining in a network with
           DAs is for active and passive DA discovery.

           SAs register automatically with any DAs they discover within a set of common scopes.
           Consequently, DAs within the UAs' scopes reduce multicast traffic. By eliminating multicast
           for normal UA requests, delays and time-outs are eliminated.

          DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection
          of scopes provides a centralized point for monitoring SLP activity.

           You may consider using DAs in your enterprise if any of the following conditions are true:
              Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by
              snoop.
              UA clients experience long delays or time-outs during multicast service requests.
              You would like to centralize monitoring of SLP service advertisements for particular
              scopes on one or several hosts. You can deploy any number of DAs for a particular scope
              or scopes, depending on the need to balance the load.
              Your network does not have multicast enabled and consists of multiple subnets that must
              share services.
              More than 60 SAs need to respond to any given multicast service request; in this case,
              configuring an SLP DA is particularly recommended.
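           The measurable conditions above can be folded into a simple rule-of-thumb check. This is
           an illustrative sketch only (the function name and parameters are ours, and the two
           qualitative conditions, long delays and centralized monitoring, are omitted):

```python
def da_recommended(multicast_pct, sa_count, multicast_enabled, subnet_count):
    """Rule of thumb: is deploying an SLP Directory Agent worthwhile?"""
    if multicast_pct > 1.0:       # multicast SLP traffic above 1% of bandwidth
        return True
    if sa_count > 60:             # more than 60 SAs answer each multicast request
        return True
    if not multicast_enabled and subnet_count > 1:
        return True               # multiple subnets that must share services
    return False
```

           For example, da_recommended(0.2, 75, True, 1) returns True because of the SA count
           alone.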


5.2.2 SLP configuration recommendation
          Some configuration recommendations are provided for enabling TotalStorage Productivity
          Center to discover a larger set of storage devices. These recommendations cover some of
          the more common SLP configuration problems.

          This topic discusses router configuration and SLP directory agent configuration.

          Router configuration
           Configure the routers in the network to enable general multicasting or to allow multicasting
           for the SLP multicast address, 239.255.255.253, and port 427. The routers of interest are
          those that are associated with subnets that contain one or more storage devices that are to
          be discovered and managed by TotalStorage Productivity Center. To configure your router
          hardware and software, refer to your router reference and configuration documentation.

            Attention: Routers are sometimes configured to prevent multicast packets from passing
            between subnets. Routers configured this way prevent discovery of systems across
            subnets using multicasting. Routers can also be configured to restrict the minimum
            multicast TTL (time-to-live) for packets they pass between subnets, which can make it
            necessary to set the multicast TTL higher to discover systems on the router's other
            subnets.

            The multicast TTL controls the time-to-live for the multicast discovery packets. This value
            typically corresponds to the number of times a packet is forwarded between subnets,
            allowing control of the scope of subnets discovered.

           Multicast discovery does not discover Director V1.x systems or systems using TCP/IP
           protocol stacks that do not support multicasting (for example, some older Windows 3.x and
           Novell 3.x TCP/IP implementations).
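
            At the socket level, the multicast TTL is just a per-socket option. The following Python
            sketch is illustrative only (it is not product code; only the multicast address and port come
            from the text above):

```python
import socket

SLP_MCAST_ADDR = "239.255.255.253"  # SLP multicast address
SLP_PORT = 427                      # SLP port

# Raise the multicast TTL on a UDP socket so discovery packets can cross
# router hops; the routers must still be configured to forward them.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
sock.close()
```

            A TTL of 2 lets a discovery packet cross one router; each additional hop needs one more.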




                                                          Chapter 5. CIMOM install and configuration    193
SLP directory agent configuration
               Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With
               statically configured DAs, all service requests are unicast by the user agent. Therefore, it is
               possible to configure one DA for each subnet that contains storage devices that are to be
               discovered by TotalStorage Productivity Center. One DA is sufficient for each subnet. Each of
               these DAs can discover all services within its own subnet, but no services outside it. To allow
               TotalStorage Productivity Center to discover all of the devices, it must be statically configured
               with the addresses of each of these DAs. This setup is described in
               “Configuring TotalStorage Productivity Center for SLP discovery” on page 223.
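
               For SLP agents that read a configuration file, static DA addresses are typically listed there.
               This fragment assumes an OpenSLP-style slp.conf and uses example addresses;
               TotalStorage Productivity Center itself is configured through its own interface as noted
               above:

```
; Hypothetical slp.conf fragment: one statically configured DA per subnet
net.slp.DAAddresses = 192.0.2.10,198.51.100.10
net.slp.useScopes   = DEFAULT
```

               With DA addresses configured statically, the agent unicasts to those DAs instead of
               multicasting its service requests.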



5.3 General performance guidelines
              Here are some general performance considerations for configuring the TotalStorage
              Productivity Center for Disk and TotalStorage Productivity Center for Replication
              environment.
                  Do not overpopulate the SLP discovery panel with SLP agent hosts. Remember that
                  TotalStorage Productivity Center for Disk includes a built-in SLP User Agent (UA) that will
                  receive information about SLP Service Agents and Directory Agents (DA) that reside in
                  the same subnet as the TotalStorage Productivity Center for Disk installation.
                   You should have no more than one DA per subnet.
                  Misconfiguring the IBM Director discovery preferences may impact performance on auto
                  discovery or on device presence checking. It may also result in application time-outs, as
                  attempts are made to resolve and communicate with hosts that are not available.
                   Treat it as mandatory to run the ESS CLI and ESS CIM agent or DS CIM agent, and the
                   LSI Provider software on another host of comparable size to the main TotalStorage
                   Productivity Center server. Attempting to run a full TotalStorage Productivity Center
                   implementation (Disk Manager, Data Manager, Fabric Manager, Replication Manager,
                   DB2, IBM Director, and the WebSphere Application Server) on the same host as the CIM
                   agent results in dramatically increased wait times for data retrieval. You may also
                   experience resource contention and port conflicts.



5.4 Planning considerations for CIMOM
               The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices
               using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in
               storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be
               installed on the same server that supports the device user interface. Figure 5-1 on page 195
               shows an overview of the CIM agent.




Figure 5-1 CIM Agent overview

You can install the CIM agent code on the same server that hosts the device management
           interface, or on a separate server.

            Attention: At this time only a few devices come with an integrated CIM agent; most
            devices need an external CIMOM for CIM-enabled management applications (CIM clients)
            to be able to communicate with the device.

            For ease of installation, IBM provides an ICAT (short for Integrated Configuration Agent
            Technology), which is a bundle that mainly includes the CIMOM, the device provider, and
            an SLP SA.


5.4.1 CIMOM configuration recommendations
           The following recommendations are based on our experience in the ITSO lab environment:
              The CIMOM agent code that you plan to use must be supported by the installed version
              of TotalStorage Productivity Center. Refer to the following link for the latest updates:
                 http://www-1.ibm.com/servers/storage/support/software/tpc/
              The storage devices must be at a firmware level supported by the CIMOM. If you have an
              incorrect version of firmware, you may not be able to discover and manage the storage
              devices.
              The data traffic between the CIMOM agent and the device can be very high, especially
              during performance data collection. Hence, a dedicated server for the CIMOM agent is
              recommended, although you may configure the same CIMOM agent for multiple devices
              of the same type.
              Plan to locate this server within the same data center as the storage devices. This is in
              consideration of firewall port requirements: it is typically best practice to minimize firewall
              port openings between the data center and the external network. If you consolidate the
              CIMOM servers within the data center, you may be able to limit the firewall openings to
              TotalStorage Productivity Center communication with the CIMOMs.
              Co-locating CIM agent instances of differing types on the same server is not
              recommended because of resource contention.
              It is strongly recommended to have separate, dedicated servers for the CIMOM agents
              and TotalStorage Productivity Center because of resource contention, TCP/IP port
              requirements, and system services coexistence.




5.5 Installing CIM agent for ESS
              Before starting Multiple Device Manager discovery, you must first configure the Common
              Information Model Object Manager (CIMOM) for ESS.

              The ESS CIM Agent package is made up of the following parts (see Figure 5-2).




              Figure 5-2 ESS CIM Agent Package

              This section provides an overview of the installation and configuration of the ESS CIM Agent
              on a Windows 2000 Advanced Server operating system.


5.5.1 ESS CLI Install
               The following installation and configuration tasks are listed in the order in which they should
               be performed:
                  Before you install the DS CIM Agent you must install the IBM TotalStorage Enterprise
                  Storage System Command Line Interface (ESS CLI) if you plan to manage 2105-F20s or
                  2105-800s with this CIM agent. The DS CIM Agent installation program checks your
                  system for the existence of the ESS CLI and provides the warning shown in Figure 5-16
                  on page 205 if no valid ESS CLI is found.

                 Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must
                 uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall
                 the latest ESS CLI software. The minimum required ESS CLI level is 2.4.0.236.


              Perform the following steps to install the ESS CLI for Windows:
              1. Insert the CD for the ESS CLI in the CD-ROM drive, run the setup and follow the
                 instructions as shown in Figure 5-3 on page 197 through Figure 5-11 on page 201.

                Note: The ESS CLI installation wizard detects if you have an earlier level of the ESS CLI
                software installed on your system and uninstalls the earlier level. After you uninstall the
                previous version, you must restart the ESS CLI installation program to install the current
                level of the ESS CLI.



Figure 5-3 ESS CLI InstallShield Wizard I

2. Select I accept the terms of the license agreement and click Next.




Figure 5-4 ESS CLI License agreement

3. Click Next.




Figure 5-5 ESS CLI choose target system panel

              4. Click Next.




              Figure 5-6 ESS CLI Setup Status panel

              5. Click Next.




Figure 5-7 ESS CLI selected options summary




Figure 5-8 ESS CLI Installation Progress

6. Click Next.




Figure 5-9 ESS CLI installation complete panel

              7. Read the information and click Next.




              Figure 5-10 ESS CLI Readme

               8. Reboot your system before proceeding with the ESS CIM Agent installation. You must do
                  this because the ESS CLI depends on environment variable settings that will not be in
                  effect for the ESS CIM Agent, which runs as a service, until you reboot your system.



Figure 5-11 ESS CLI Reboot panel

9. Verify that the ESS CLI is installed:
   – Click Start → Settings → Control Panel.
   – Double-click the Add/Remove Programs icon.
   – Verify that there is an IBM ESS CLI entry.
10.Verify that the ESS CLI is operational and can connect to the ESS. For example, from a
   command prompt window, issue the following command:
   esscli -u userid -p password -s 9.1.11.111 list server
   Where:
   – 9.1.11.111 represents the IP address of the Enterprise Storage Server
   – userid represents the Enterprise Storage Server Specialist user name
   – password represents the Enterprise Storage Server Specialist password for the user
     name

Figure 5-12 shows the response from the esscli command.




Figure 5-12 ESS CLI verification
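
When you have several ESS units, the verification in step 10 can be scripted. This is a
minimal sketch under stated assumptions: esscli must be installed and on the PATH, the
helper name and example values are ours, and only the argument order comes from the
command above:

```python
import subprocess

def verify_ess(esscli, user, password, ip):
    """Run 'esscli ... list server' against one ESS; True means it succeeded."""
    cmd = [esscli, "-u", user, "-p", password, "-s", ip, "list", "server"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

# Example (hypothetical credentials and address):
# verify_ess("esscli", "userid", "password", "9.1.11.111")
```

A False return means the ESS did not answer; recheck the IP address and Specialist
credentials before proceeding.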




5.5.2 DS CIM Agent install
              To install the DS CIM Agent in your Windows system, perform the following steps:
              1. Log on to your system as the local administrator.
              2. Insert the CIM Agent for DS CD into the CD-ROM drive. The Install Wizard launchpad
                 should start automatically, if you have autorun mode set on your system. You should see
                 a launchpad window similar to Figure 5-13.
              3. You may review the Readme file from the launchpad menu. Subsequently, you can click
                 Installation Wizard. The Installation Wizard starts executing the setup.exe program and
                 shows the Welcome panel in Figure 5-14 on page 203.

                Note: The DS CIM Agent program should start within 15 - 30 seconds if you have autorun
                mode set on your system. If the installer window does not open, perform the following
                steps:

                  – Use a Command Prompt or Windows Explorer to change to the Windows directory on
                    the CD.
                  – If you are using a Command Prompt window, run launchpad.bat.
                  – If you are using Windows Explorer, double-click on the launchpad.bat file.

                 Note: If you are using CIMOM code from the IBM download Web site rather than from the
                 distribution CD, make sure that you use a short Windows directory pathname. Executing
                 launchpad.bat from a longer pathname may fail. An example of a short pathname is
                 C:\CIMOM\setup.exe.




              Figure 5-13 DSCIM Agent launchpad



4. The Welcome window opens suggesting what documentation you should review prior to
   installation. Click Next to continue (see Figure 5-14).




Figure 5-14 DS CIM Agent welcome window




5. The License Agreement window opens. Read the license agreement information. Select
                 I accept the terms of the license agreement, then click Next to accept the license
                 agreement (see Figure 5-15).




              Figure 5-15 DS CIM Agent license agreement




The window shown in Figure 5-16 appears only if no valid ESS CLI is installed. If you do not
plan to manage an ESS from this CIM agent, click Next.

 Important: If you plan to manage an ESS from this CIM agent, then click Cancel. Install
 the ESS CLI following the instructions in 5.5.1, “ESS CLI Install” on page 196.




Figure 5-16 DS CIM Agent ESS CLI warning




6. The Destination Directory window opens. Accept the default directory and click Next (see
                 Figure 5-17).




              Figure 5-17 DS CIM Agent destination directory panel




7. The Updating CIMOM Port window opens (see Figure 5-18). Click Next to accept the
   default port if it is available and free in your environment. For our ITSO setup we used the
   default port 5989.

    Note: If the default port is the same as another port already in use, modify the default
    port and click Next. Use the following command to check which ports are in use:
    netstat -a




Figure 5-18 DS CIM Agent port window
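
Instead of reading netstat output by eye, you can probe whether a port is free by trying to
bind it. An illustrative sketch (the helper is ours; only the default port 5989 comes from the
step above):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Try to bind the port; a failed bind means something already owns it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return False
    return True

# Example: check the default CIMOM port before accepting it.
cimom_port_free = port_is_free(5989)
```

If port_is_free(5989) returns False, choose another port in the Updating CIMOM Port
window.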




8. The Installation Confirmation window opens (see Figure 5-19). Click Install to confirm the
                 installation location and file size.




              Figure 5-19 DS CIM Agent installation confirmation




9. The Installation Progress window opens (see Figure 5-20) indicating how much of the
   installation has completed.




Figure 5-20 DS CIM Agent installation progress

10.When the Installation Progress window closes, the Finish window opens (see Figure 5-21
   on page 210). Check the View post installation tasks check box if you want to view the
   post installation tasks readme when the wizard closes. We recommend you review the
   post installation tasks. Click Finish to exit the installation wizard (Figure 5-21 on
   page 210).

 Note: Before proceeding, you might want to review the log file for any error messages. The
 log file is located in xxx\logs\install.log, where xxx is the destination directory where the DS
 CIM Agent for Windows is installed.




Figure 5-21 DS CIM Agent install successful

              11.If you checked the view post installation tasks box, then the window shown in Figure 5-22
                 appears. Close the window when you have finished reviewing the post installation tasks.




              Figure 5-22 DS CIM Agent post install readme

              The launchpad window (Figure 5-13 on page 202) appears. Click Exit.




5.5.3 Post Installation tasks
            Continue with the following post installation tasks for the ESS CIM Agent.

            Verify the installation of the SLP
            Proceed as follows:
               Verify that the Service Location Protocol is started. Select Start → Settings → Control
               Panel. Double-click the Administrative Tools icon. Double-click the Services icon.
               Find Service Location Protocol in the Services window list. For this component, the Status
               column should be marked Started as shown in Figure 5-23.




            Figure 5-23 Verify Service Location Protocol started

               If SLP is not started, right-click on the SLP and select Start from the pop-up menu. Wait for
               the Status column to be changed to Started.

            Verify the installation of the DS CIM Agent
            Proceed as follows:
               Verify that the CIMOM service is started. If you closed the Services window, select Start
                → Settings → Control Panel. Double-click the Administrative Tools icon. Double-click
               the Services icon.
               Find the IBM CIM Object Manager - ESS in the Services window list. For this component,
               the Status column should be marked Started and the Startup Type column should be
               marked Automatic, as shown in Figure 5-24 on page 212.




Figure 5-24 DS CIM Object Manager started confirmation

                  If the IBM CIM Object Manager is not started, right-click on the IBM CIM Object Manager -
                  ESS and select Start from the pop-up menu. Wait for the Status column to change to
                  Started.

              If you are able to perform all of the verification tasks successfully, the ESS CIM Agent has
              been successfully installed on your Windows system. Next, perform the configuration tasks.



5.6 Configuring the DS CIM Agent for Windows
              This task configures the DS CIM Agent after it has been successfully installed.

              Perform the following steps to configure the DS CIM Agent:
                  Configure the ESS CIM Agent with the information for each Enterprise Storage Server the
                  ESS CIM Agent is to access.
                  – Start → Programs → IBM TotalStorage CIM Agent for ESS → CIM agent for the
                    IBM TotalStorage DS Open API → Enable DS Communications as shown in
                    Figure 5-25.




              Figure 5-25 Configuring the ESS CIM Agent


5.6.1 Registering DS Devices
              Type the following command for each DS server that is configured:
                  addessserver <ip> <user> <password>

              Where:
                  – <ip> represents the IP address of the Enterprise Storage Server
                  – <user> represents the DS Storage Server user name
                  – <password> represents the DS Storage Server password for the user name


Repeat the previous step for each additional DS device that you want to configure.

            Note: The CIMOM collects and caches the information from the defined DS servers at
            startup time; therefore, the CIMOM might take longer to start the next time you start it.



            Attention: If the user name and password entered are incorrect, or if the DS CIM agent
            cannot connect to the DS, an error occurs and the DS CIM Agent will not start and stop
            correctly. Use the following command to remove the entry that is causing the problem,
            and reboot the server:
               – rmessserver <ip>

            Whenever you add or remove a DS from CIMOM registration, you must restart the CIMOM
            to pick up the updated DS device list.


5.6.2 Registering ESS Devices
           Proceed as follows:
               Type the addess <ip> <user> <password> command for each ESS:
               – Where <ip> represents the IP address of a cluster of the Enterprise Storage Server
               – <user> represents the Enterprise Storage Server Specialist user name
               – <password> represents the Enterprise Storage Server Specialist password for the user
                 name. The addess command example is shown in Figure 5-26 on page 214.

             Important:
                The DS CIM agent relies on ESS CLI connectivity from the DS CIMOM server to the
                ESS devices. Make sure that the ESS devices you are registering are reachable and
                available at this point. It is recommended to verify this by launching the ESS Specialist
                browser from the ESS CIMOM server. Log on to both clusters of each ESS and make
                sure you are authenticated with the correct ESS passwords and IP addresses.
                If the ESSs are on a different subnet than the ESS CIMOM server and behind a
                firewall, you must authenticate through the firewall before registering the ESSs with the
                CIMOM.
                If you have a bi-directional firewall between the ESS devices and the CIMOM server,
                verify the connection using the rsTestConnection command of the ESS CLI. If the ESS
                CLI connection is not successful, you must authenticate through the firewall in both
                directions, that is, from the ESS to the CIMOM server and from the CIMOM server to
                the ESS.
                Once you are able to authenticate and receive an ESS CLI heartbeat from all of the
                ESSs, you may proceed to enter the ESS IP addresses.

             If the CIMOM agent fails to authenticate with the ESSs, it will not start properly and may
             be very slow, because it retries the authentication.




Figure 5-26 The addess command example


5.6.3 Register ESS server for Copy services
               Type the following command for each ESS server that is configured for Copy Services:
                   addessserver <ip> <user> <password>
                  – Where <ip> represents the IP address of the Enterprise Storage Server
                  – <user> represents the Enterprise Storage Server Specialist user name
                  – <password> represents the Enterprise Storage Server Specialist password for the user
                    name
                  Repeat the previous step for each additional ESS device that you want to configure.
                  Close the setdevice interactive session by typing exit.
                  Once you have defined all the ESS servers, you must stop and restart the CIMOM to
                  make the CIMOM initialize the information for the ESS servers.
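Putting the steps together, a complete registration session looks like this sketch (run from the CIM agent install directory; the IP address and Specialist credentials shown are placeholders for your own ESS values, and the interactive prompts and confirmation messages are omitted):

```
C:\Program Files\IBM\cimagent> setdevice
addessserver 9.1.38.50 essuser esspass
exit
```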

Note: The CIMOM collects and caches information from the defined ESS servers at startup
time, so the next start of the CIMOM might take a longer period of time.



Attention: If the user name and password entered are incorrect, or the ESS CIM agent
cannot connect to the ESS, an error occurs and the ESS CIM Agent will not start and
stop correctly. Use the following command to remove the ESS entry that is causing the
problem, and then reboot the server:
    – rmessserver <ip>

Whenever you add or remove an ESS from the CIMOM registration, you must restart the
CIMOM to pick up the updated ESS device list.




214   IBM TotalStorage Productivity Center V2.3: Getting Started
5.6.4 Restart the CIMOM
Perform these steps to stop and restart the CIMOM using the Windows Start menu facility.
This is required so that the CIMOM can register new devices or unregister deleted devices:
             Stop the CIMOM by selecting Start → Programs → CIM Agent for the IBM
             TotalStorage DS Open API → Stop CIMOM service. A Command Prompt window
             opens to track the stoppage of the CIMOM (as shown in Figure 5-27). If the CIMOM has
             stopped successfully, the following message is displayed:




          Figure 5-27 Stop ESS CIM Agent

             Restart the CIMOM by selecting Start → Programs → CIM Agent for the IBM
             TotalStorage DS Open API → Start CIMOM service. A Command Prompt window
             opens to track the progress of the starting of the CIMOM. If the CIMOM has started
             successfully, the message shown in Figure 5-28 is displayed.




          Figure 5-28 Restart ESS CIM Agent


           Note: The restarting of the CIMOM may take a while because it is connecting to the
           defined ESS servers and is caching that information for future use.
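If you prefer the command line, the same stop and restart can be done with the Windows net command; a sketch, assuming the service display name shown in the Services panel (IBM CIM Object Manager):

```
C:\> net stop "IBM CIM Object Manager"
C:\> net start "IBM CIM Object Manager"
```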


5.6.5 CIMOM user authentication
          Use the setuser interactive tool to configure the CIMOM for the users who will have the
          authority to use the CIMOM. The user is the TotalStorage Productivity Center for Disk and
          Replication superuser.

Important: A TotalStorage Productivity Center for Disk and Replication superuser ID and
password must be created. This user ID is initially used by TotalStorage Productivity
Center to connect to the CIM Agent. It is easiest if this superuser ID is used for all CIM
Agents, but it can be set individually for each CIM Agent if necessary. This user ID should
be eight characters or fewer.

           Upon installation of the CIM Agent for ESS, the provided default user name is “superuser”
           with a default password of “passw0rd”. The first time you use the setuser tool, you must use
           this user name and password combination. Once you have defined other user names, you
           can start the setuser command by specifying other defined CIMOM user names.



Note: The users that you configure to have authority to use the CIMOM are uniquely
defined to the CIMOM software and have no required relationship to operating system user
names, ESS Specialist user names, or ESS Copy Services user names.

Here is the procedure:
– Open a Command Prompt window and change directory to the ESS CIM Agent
  directory, for example C:\Program Files\IBM\cimagent.
– Type the command setuser -u superuser -p passw0rd at the command prompt to
  start the setuser interactive session to identify users to the CIMOM.
– Type the command adduser cimuser cimpass in the setuser interactive session to
  define new users, where:
    •   cimuser represents the new user name to access the ESS CIM Agent
        CIMOM
    •   cimpass represents the password for the new user name to access the ESS CIM
        Agent CIMOM
– Close the setuser interactive session by typing exit.

              For our ITSO Lab setup we used TPCCIMOM as superuser and TPCCIMOM as the
              password.
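With the ITSO values above, the session is simply (prompts and confirmation messages omitted):

```
C:\Program Files\IBM\cimagent> setuser -u superuser -p passw0rd
adduser TPCCIMOM TPCCIMOM
exit
```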



5.7 Verifying connection to the ESS
              During this task the ESS CIM Agent software connectivity to the Enterprise Storage Server
              (ESS) is verified.

              The connection to the ESS is through the ESS CLI software.

              If the network connectivity fails or if the user name and password that you set in the
              configuration task is incorrect, the ESS CIM Agent cannot connect successfully to the ESS.

              The installation, verification, and configuration of the ESS CIM Agent must be completed
              before you verify the connection to the ESS.
                  Verify that you have network connectivity to the ESS from the system where the ESS CIM
                  Agent is installed. Issue a ping command to the ESS and check that you can see reply
                  statistics from the ESS IP address.
Verify that the SLP is active by selecting Start → Settings → Control Panel.
Double-click the Administrative Tools icon, then double-click the Services icon. You
should see a panel similar to Figure 5-23 on page 211. Ensure that the Status is Started.
Verify that the CIMOM is active by selecting Start → Settings → Control Panel →
Administrative Tools → Services. In the Services panel, select the IBM CIM Object
Manager service and verify that the Status is shown as Started, as shown in Figure 5-29 on
page 217.




Figure 5-29 Verify ESS CIMOM has started

Verify that the CIMOM has a dependency on SLP; this is configured automatically when you
installed the CIM agent software. Verify it by selecting Start → Settings → Control
Panel. Double-click the Administrative Tools icon, double-click the Services icon, and
then select properties on Service Location Protocol as shown in
Figure 5-30.




Figure 5-30 SLP properties panel

   Click Properties and select the Dependencies tab as shown in Figure 5-31 on page 218.
   You must ensure that IBM CIM Object Manager has a dependency on Service Location
   Protocol (this should be the case by default).




Figure 5-31 SLP dependency on CIMOM

Verify CIMOM registration with SLP by selecting Start → Programs → CIM Agent for the
IBM TotalStorage DS Open API → Check CIMOM Registration. A window opens
                  displaying the WBEM services as shown in Figure 5-32. These services have either
                  registered themselves with SLP or you explicitly registered them with SLP using slptool.
                  If you changed the default ports for a CIMOM during installation, the port number should
                  be correctly listed here. It may take some time for a CIM Agent to register with SLP.




              Figure 5-32 Verify CIM Agent registration with SLP


                Note: If the verification of the CIMOM registration is not successful, stop and restart the
                SLP and CIMOM services. Note that the ESS CIMOM will attempt to contact each ESS
                registered to it. Therefore, the startup may take some time, especially if it is not able to
                connect and authenticate to any of the registered ESSs.

Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user
name and passw0rd is the password that you configured to manage the
CIMOM, to locate all WBEM services in the local network. You need to define the
TotalStorage Productivity Center for Disk superuser name and password in order for
TotalStorage Productivity Center for Disk to have the authority to manage the CIMOM. The
verifyconfig command checks the registration for the ESS CIM Agent and checks that it can



connect to the ESSs. At ITSO Lab we had configured two ESSs (as shown in Figure 5-33 on
          page 219).




          Figure 5-33 The verifyconfig command


5.7.1 Problem determination
You might run into some errors. If that is the case, examine the cimom.log file.

This file is located in the C:\Program Files\IBM\cimagent directory. Verify that you
have entries with your current install timestamp, as shown in Figure 5-34. The entries of
specific interest are:
CMMOM0500I Registered service service:wbem:https://x.x.x.x:5989 with SLP SA
             CMMOM0409I Server waiting for connections

The first entry indicates that the CIMOM has successfully registered with SLP using the port
number specified at ESS CIM agent install time; the second entry indicates that it has
started successfully and is waiting for connections.




          Figure 5-34 CIMOM Log Success


If you still have problems, refer to the DS Open Application Programming Interface
Reference for an explanation and resolution of the error messages. You can find this
guide in the doc directory at the root of the CIM Agent CD.


5.7.2 Confirming the ESS CIMOM is available
Before you proceed, you need to be sure that the DS CIMOM is listening for incoming
connections. To do this, run a telnet command from the server where TotalStorage
Productivity Center for Disk resides. A successful telnet to the configured port (indicated
by a blank screen with the cursor at the top left) tells you that the DS CIMOM is active. You
selected this port during DS CIMOM code installation. If the telnet connection fails, you will
see a panel like the one shown in Figure 5-35. In that case, investigate the
problem until you get a blank screen for the telnet port.




              Figure 5-35 Example of telnet fail connection
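As an alternative to telnet, a small socket probe can confirm that the CIMOM port is listening; a minimal sketch, assuming our ITSO CIMOM address 9.1.38.48 and the default secure port 5989 (substitute your own server address and the port chosen at install time):

```python
# port_check.py - minimal TCP reachability probe for a CIMOM port.
import socket

def port_is_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a TCP handshake;
        # any failure (refused, timed out, unreachable) raises OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage against our ITSO ESS CIMOM server:
#   port_is_open("9.1.38.48", 5989)   # True when the CIMOM is listening
```

A True result corresponds to the blank telnet screen; False corresponds to the failed connection shown in Figure 5-35.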

Another method to verify that the DS CIMOM is up and running is to use the CIM Browser
interface. For Windows machines, change the working directory to C:\Program
Files\IBM\cimagent and run startcimbrowser. The WBEM browser in Figure 5-36
appears. The default user name is superuser and the default password is passw0rd. If you
have already changed them using the setuser command, the new user ID and password must be
provided. This should be set to the TotalStorage Productivity Center for Disk user ID and
password.




              Figure 5-36 WBEM Browser


When login is successful, you should see a panel like the one in Figure 5-37.




           Figure 5-37 CIMOM Browser window


5.7.3 Setting up the Service Location Protocol Directory Agent
           You can use the following procedure to set up the Service Location Protocol (SLP) Directory
           Agent (DA) so that TotalStorage Productivity Center for Disk can discover devices that reside
           in subnets other than the one in which TotalStorage Productivity Center for Disk resides.

           Perform the following steps to set up the SLP DAs:
           1. Identify the various subnets that contain devices that you want TotalStorage Productivity
              Center for Disk to discover.
           2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each
              of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It
              is possible to pick more than one CIM Agent per subnet, but it is not necessary for
              discovery purposes.)
           3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a
              daemon process. Each of these SAs is configured using a configuration file named
              slp.conf. Perform the following steps to edit the file:
– For example, if you have the DS CIM agent installed in the default install directory path,
  go to the C:\Program Files\IBM\cimagent\slp directory.
              – Look for file named slp.conf.
              – Make a backup copy of this file and name it slp.conf.bak.




– Open the slp.conf file and scroll down until you find (or search for) the line
      ;net.slp.isDA = true
    Remove the semicolon (;) at the beginning of the line. Ensure that this property is set
    to true (= true) rather than false. Save the file.
  – Copy this file (or replace it if the file already exists) to the main Windows directory
    on Windows machines (for example, C:\winnt), or to the /etc directory on UNIX
    machines.
4. We recommend rebooting the SLP server at this stage. Alternatively, you can restart
   the SLP and CIMOM services: from the Windows desktop, select Start Menu → Settings →
   Control Panel → Administrative Tools → Services, locate Service Location Protocol,
   right-click, and select Stop. Another panel pops up requesting to stop the IBM CIM
   Object Manager service; click Yes.
   You can start the SLP daemon again after it has stopped successfully.

                  Alternatively, you may choose to re-start the CIMOM using command line as described in
                  “Restart the CIMOM” on page 215
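The edit in step 3 amounts to uncommenting a single property in slp.conf; a before/after sketch:

```
# Before: the agent runs as a service agent (SA)
;net.slp.isDA = true

# After: the agent runs as a directory agent (DA)
net.slp.isDA = true
```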


Creating slp.reg file

  Important: To avoid manually registering CIMOMs outside the subnet every time the
  Service Location Protocol (SLP) is restarted, create a file named slp.reg.

  The default location for the registration file is C:\winnt.

  slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is
  received.

              slp.reg file example
              Example 5-1 is a slp.reg file sample.

              Example 5-1 slp.reg file
              #############################################################################
              #
              # OpenSLP static registration file
              #
              # Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
              #
              #############################################################################


              #----------------------------------------------------------------------------
              # Register Service - SVC CIMOMS
              #----------------------------------------------------------------------------


              service:wbem:https://9.43.226.237:5989,en,65535
              # use default scopes: scopes=test1,test2
              description=SVC CIMOM Open Systems Lab, Cottle Road
              authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
              creation_date=04/02/20

              service:wbem:https://9.11.209.188:5989,en,65535
              # use default scopes: scopes=test1,test2


description=SVC CIMOM Tucson L2 Lab
           authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
           creation_date=04/02/20

           #service:wbem:https://9.42.164.175:5989,en,65535
           # use default scopes: scopes=test1,test2
           #description=SVC CIMOM Raleigh SAN Central
           #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
           #creation_date=04/02/20

           #----------------------------------------------------------------------------
           # Register Service - SANFS CIMOMS
           #----------------------------------------------------------------------------

           #service:wbem:https://9.82.24.66:5989,en,65535
           #Additional parameters for setting the appropriate namespace values
           #CIM_InteropSchemaNamespace=root/cimv2
           #Namespace=root/cimv2
           # use default scopes: scopes=test1,test2
           #description=SANFS CIMOM Gaithersburg ATS Lab
           #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
           #creation_date=04/02/20

           #service:wbem:https://9.11.209.148:5989,en,65535
           #Additional parameters for setting the appropriate namespace values
           #CIM_InteropSchemaNamespace=root/cimv2
           #Namespace=root/cimv2
           # use default scopes: scopes=test1,test2
           #description=SANFS CIMOM Tucson L2 Lab
           #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
           #creation_date=04/02/20

           #----------------------------------------------------------------------------
           # Register Service - FAStT CIMOM
           #----------------------------------------------------------------------------

           #service:wbem:https://9.1.39.65:5989,en,65535
           #CIM_InteropSchemaNamespace=root/lsissi
           #ProtocolVersion=0
           #Namespace=root/lsissi
           # use default scopes: scopes=test1,test2
           #description=FAStT700 CIMOM ITSO Lab, Almaden
           #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
           #creation_date=04/02/20



5.7.4 Configuring TotalStorage Productivity Center for SLP discovery
           You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center for
           Disk sends unicast service requests to each of these statically configured DAs, and sends
           multicast service requests on the local subnet on which TotalStorage Productivity Center for
           Disk is installed. Configure an SLP DA by changing the configuration of the SLP service
           agent (SA) that is included as part of an existing CIM Agent installation. This causes the
           program that normally runs as an SLP SA to run as an SLP DA.




You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is
              not affected and will register itself with the DA instead of the SA. However, the DA will
              automatically discover all other services registered with other SLP SAs in that subnet.

                Attention: You will need to register the IP address of the server running the SLP DA
                daemon with the IBM Director to facilitate MDM SLP discovery. You can do this using the
                IBM Director console interface of TotalStorage Productivity Center for Disk. The procedure
                to register the IP address is described in 6.2, “SLP DA definition” on page 248.


5.7.5 Registering the DS CIM Agent to SLP
You need to manually register the DS CIM agent with the SLP DA only when both of the
following conditions are true:
    There is no DS CIM Agent in the TotalStorage Productivity Center for Disk server
    subnet.
    The SLP DA used by Multiple Device Manager is also not running a DS CIM Agent.

                Tip: If either of the preceding conditions are false, you do not need to perform the following
                steps.

To register the DS CIM Agent, issue the following commands on the SLP DA server:
    C:\> cd C:\Program Files\IBM\cimagent\slp
    slptool register service:wbem:https://ipaddress:port

Where ipaddress is the ESS CIM Agent IP address. For our ITSO setup, the IP address
of our ESS CIMOM server was 9.1.38.48 and the default port number was 5989. Issue a
verifyconfig command as shown in Figure 5-33 on page 219 to confirm that SLP is aware
of the registration.

Attention: Whenever you update the SLP configuration as shown above, you may have to
stop and start the slpd daemon. This enables SLP to register and listen on the newly
configured ports.

Also, whenever you restart the SLP daemon, ensure that the IBM DS CIMOM agent has also
restarted. Otherwise, you can issue the startcimom.bat command, as shown in previous
steps, or reboot the CIMOM server. Note that the DS CIMOM startup takes a longer
time.


5.7.6 Verifying and managing CIMOM’s availability
              You may now verify that TotalStorage Productivity Center for Disk can authenticate and
              discover the CIMOM agent services which are registered to the SLP DA. See “Verifying and
              managing CIMOMs availability” on page 256.




5.8 Installing CIM agent for IBM DS4000 family
The latest code for the IBM DS4000 family is available at the IBM support Web site. You need
to download the correct and supported level of CIMOM code for TotalStorage Productivity
Center for Disk Version 2.3.

You can navigate from the following IBM support Web site for TotalStorage Productivity
Center for Disk to acquire the correct CIMOM code:
   http://www-1.ibm.com/servers/storage/support/software/tpcdisk/

You may have to traverse multiple links to get to the download files. At the time of
writing this book, the Web page shown in Figure 5-38 was displayed.




         Figure 5-38 IBM support matrix Web page




Scrolling down the same Web page, we come to the link for DS 4000 CIMOM code
shown in Figure 5-39. This link leads to the Engenio Provider site. The current supported
code level is 1.0.59, as indicated on the Web page.




              Figure 5-39 Web download link for DS Family CIMOM code

              From the Web site, select the operating system used for the server on which the IBM DS
              family CIM Agent will be installed. You will download a setup.exe file. Save it to a directory on
              the server on which you will be installing the DS 4000 CIM Agent (see Figure 5-40 on
              page 227).




Figure 5-40 DS CIMOM Install

Launch the setup.exe file to begin the DS 4000 family CIM agent installation. The
InstallShield Wizard for LSI SMI-S Provider window opens (see Figure 5-41). Click Next to
continue.




Figure 5-41 LSI SMI-SProvider window




The LSI License Agreement window opens next. If you agree with the terms of the license
              agreement, click Yes to accept the terms and continue the installation (see Figure 5-42).




              Figure 5-42 LSI License Agreement

              The LSI System Info window opens. The minimum requirements are listed along with the
              install system disk free space and memory attributes as shown in Figure 5-43. If the install
              system fails the minimum requirements evaluation, then a notification window will appear and
              the installation will fail. Click Next to continue.




              Figure 5-43 System Info window




The Choose Destination Location window appears. Click Browse to choose another location
or click Next to begin the installation of the FAStT CIM agent (see Figure 5-44).




Figure 5-44 Choose a destination

The InstallShield Wizard will now prepare and copy the files into the destination directory.
See Figure 5-45.




Figure 5-45 Install Preparation window




The README appears after the files have been installed. Read through it to become familiar
              with the most current information (see Figure 5-46). Click Next when ready to continue.




              Figure 5-46 README file

              In the Enter IPs and/or Hostnames window, enter the IP addresses and hostnames of the
              FAStT devices that this FAStT CIM agent will manage as shown in Figure 5-47.




              Figure 5-47 FAStT device list




Use the Add New Entry button to add the IP addresses or hostnames of the FAStT devices
that this FAStT CIM agent will communicate with. Enter one IP address or hostname at a time
until all the FAStT devices have been entered and click Next (see Figure 5-48).




Figure 5-48 Enter hostname or IP address

Do not enter the IP address of a FAStT device in multiple FAStT CIM Agents within the same
subnet. This may cause unpredictable results on the TotalStorage Productivity Center for
Disk server and could cause a loss of communication with the FAStT devices.

If the list of hostnames or IP addresses has been previously written to a file, use the Add File
Contents button, which will open the Windows Explorer. Locate and select the file and then
click Open to import the file contents.

When all the FAStT device hostnames and IP addresses have been entered, click Next to
start the SMI-S Provider Service (see Figure 5-49).




Figure 5-49 Provider Service starting




When the Service has started, the installation of the FAStT CIM agent is complete (see
              Figure 5-50).




              Figure 5-50 Installation complete


              Arrayhosts file
The installer creates a file called:
    – %installroot%\SMI-SProvider\wbemservices\cimom\bin\arrayhosts.txt

The arrayhosts file is shown in Figure 5-51. In this file, the IP addresses of installed DS 4000
units can be reviewed, added, or edited.




              Figure 5-51 Arrayhosts file


              Verifying LSI Provider Service availability
You can verify from the Windows Services panel that the LSI Provider service has started, as
shown in Figure 5-52 on page 233. If you change the contents of the arrayhosts file to add
or delete DS 4000 devices, you need to restart the LSI Provider service using the
Windows Services panel.




Figure 5-52 LSI Provider Service


           Registering DS4000 CIM agent
The DS4000 CIM Agent needs to be registered with an SLP DA if the FAStT CIM Agent is in
a different subnet than that of the IBM TotalStorage Productivity Center for Disk and Replication
Base environment. The registration is not currently provided automatically by the CIM Agent.
You register the DS4000 CIM Agent with the SLP DA from a command prompt using the slptool
command.

An example of the slptool command is shown below. You must change the IP address to
reflect the IP address of the workstation or server where you installed the DS4000
family CIM Agent. The IP address of our FAStT CIM Agent is 9.1.38.79 and the port is 5988. You
need to execute this command on your SLP DA server. In our ITSO lab, we used the SLP DA on
the ESS CIMOM server. Go to the directory C:\Program Files\IBM\cimagent\slp
and run:
   slptool register service:wbem:http://9.1.38.79:5988

            Important: You cannot have the FAStT management password set if you are using IBM
            TotalStorage Productivity Center.

At this point, you can run the following command on the SLP DA server to verify that the
DS 4000 family FAStT CIM agent is registered with the SLP DA:
   slptool findsrvs wbem

The response from this command shows the available services, which you can verify.
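With both the ESS CIMOM (9.1.38.48, port 5989) and the FAStT CIM agent (9.1.38.79, port 5988) registered, the output should contain entries along these lines (the trailing value is the SLP lifetime and may differ in your environment):

```
C:\Program Files\IBM\cimagent\slp> slptool findsrvs wbem
service:wbem:https://9.1.38.48:5989,65535
service:wbem:http://9.1.38.79:5988,65535
```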


5.8.1 Verifying and Managing CIMOM availability
           You may now verify that TotalStorage Productivity Center for Disk can authenticate and
           discover the CIMOM agent services which are registered by SLP DA. You can proceed to
           your TotalStorage Productivity Center for Disk server. See “Verifying and managing CIMOMs
           availability” on page 256.




5.9 Configuring CIMOM for SAN Volume Controller
              The CIM Agent for SAN Volume Controller is part of the SAN Volume Controller Console and
              provides the TotalStorage Productivity Center for Disk with access to SAN Volume Controller
              clusters. You must customize the CIM Agents in your enterprise to accept the TotalStorage
              Productivity Center for Disk user name and password. Figure 5-53 explains the
              communication between the TotalStorage Productivity Center for Disk and SAN Volume
              Controller Environment.




              Figure 5-53 TotalStorage Productivity Center for Disk and SVC communication

              For additional details on how to configure the SAN Volume Controller Console, refer to the
              redbook IBM TotalStorage Introducing the SAN Volume Controller and SAN Integration
              Server, SG24-6423.

              To discover and manage the SAN Volume Controller, we need to ensure that our TotalStorage
              Productivity Center for Disk superuser name and password (the account we specify in the
              TotalStorage Productivity Center for Disk configuration panel, as shown in 5.9.1, “Adding the
              SVC TotalStorage Productivity Center for Disk user account” on page 235) matches an
              account defined on the SAN Volume Controller console. In our case we implemented
              username TPCSUID and password ITSOSJ.

You may want to adopt a similar nomenclature and set up the user name and password on
each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center
for Disk.




5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
            As stated previously, you should implement a unique user ID to manage the SAN Volume
            Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the
            SAN Volume Controller console using the following steps:
            1. Log in to the SAN Volume Controller console with a superuser account.
            2. Click Users under My Work on the left side of the panel (see Figure 5-54).




           Figure 5-54 SAN Volume Controller console




3. Select Add a user from the drop-down list in the Users panel and click Go (see Figure 5-55).




              Figure 5-55 SAN Volume Controller console Add a user




4. An introduction screen opens. Click Next (see Figure 5-56).




Figure 5-56 SAN Volume Controller Add a user wizard




5. Enter the User Name and Password and click Next (see Figure 5-57).




              Figure 5-57 SAN Volume Controller Console Define users panel

              6. Select your candidate cluster and move it to the right under Administrator Clusters (see
                 Figure 5-58). Click Next to continue.




              Figure 5-58 SAN Volume Controller console Assign administrator roles




7. Click Next after you Assign service roles (see Figure 5-59).




Figure 5-59 SAN Volume Controller Console Assign user roles




8. Click Finish after you verify user roles (see Figure 5-60).




              Figure 5-60 SAN Volume Controller Console Verify user roles

              9. After you click Finish, the Viewing users panel opens (see Figure 5-61).




              Figure 5-61 SAN Volume Controller Console Viewing Users


Confirming that the SAN Volume Controller CIMOM is available
            Before you proceed, you need to be sure that the CIMOM on the SAN Volume Controller is
            listening for incoming connections. To do this, issue a telnet command from the server
            where TotalStorage Productivity Center for Disk resides. A successful telnet on port 5989 (as
            indicated by a black screen with the cursor at the top left) tells you that the SAN Volume
            Controller console CIMOM is active. If the telnet connection fails, you see a panel like the
            one in Figure 5-62.




            Figure 5-62 Example of a failed telnet connection
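If no telnet client is available, the same check can be scripted. The following is only a sketch using the bash /dev/tcp facility and the GNU timeout command; the check_cimom helper name is our own, and the address shown is the lab example address used later in this chapter.

```shell
# Sketch: non-interactive check that a CIMOM port accepts connections.
# check_cimom is a hypothetical helper; 5989 is the secure CIMOM port.
check_cimom() {
  local ip=$1 port=$2
  # Try to open a TCP connection; give up after 5 seconds.
  if timeout 5 bash -c "exec 3<>/dev/tcp/${ip}/${port}" 2>/dev/null; then
    echo "CIMOM at ${ip}:${port} is accepting connections"
  else
    echo "CIMOM at ${ip}:${port} is NOT reachable"
  fi
}

# Example: probe the SAN Volume Controller console address.
check_cimom 9.43.226.237 5989
```

Unlike an interactive telnet, the function returns a clear one-line result, so it can be dropped into a monitoring script.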


5.9.2 Registering the SAN Volume Controller host in SLP
           The next step in detecting a SAN Volume Controller is to manually register the SAN Volume
           Controller console with the SLP DA.

            Tip: If your SAN Volume Controller console resides in the same subnet as the
            TotalStorage Productivity Center server, SLP registration will be automatic so you do not
            need to perform the following step.

           To register the SAN Volume Controller console, run the following command on the SLP
           DA server:
              slptool register service:wbem:https://ipaddress:5989

           Here, ipaddress is the SAN Volume Controller console IP address.

           Run a verifyconfig command to confirm that SLP is aware of the SAN Volume Controller
           console registration.



5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
           The TotalStorage Productivity Center discovers both IBM storage devices that comply with
           the Storage Management Initiative Specification (SMI-S) and SAN devices such as switches,
           ports, and hosts. SMI-S-compliant storage devices are discovered using the Service Location
           Protocol (SLP).




              The TotalStorage Productivity Center server software performs SLP discovery on the
              network. The User Agent looks for all registered services with a service type of service:wbem.
              The TotalStorage Productivity Center performs the following discovery tasks:
                  Locates individual storage devices
                  Retrieves vital characteristics for those storage devices
                  Populates the TotalStorage Productivity Center internal databases with the discovered
                  information

              The TotalStorage Productivity Center can also access storage devices through the CIM
              Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM
              services have been discovered through SLP, the TotalStorage Productivity Center contacts
              each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM.
              TotalStorage Productivity Center gathers the vital characteristics of each of these devices.

              For the TotalStorage Productivity Center to successfully communicate with the CIMOMs, the
              following conditions must be met:
                  A common user name and password must be configured for all the CIM Agent instances
                  that are associated with storage devices that are discoverable by TotalStorage
                  Productivity Center (use adduser as described in 5.6.5, “CIMOM user authentication” on
                  page 215). That same user name and password must also be configured for TotalStorage
                  Productivity Center using the Configure MDM task in the TotalStorage Productivity Center
                  interface. If a CIMOM is not configured with the matching user name and password, it will
                  be impossible to determine which devices the CIMOM supports. As a result, no devices
                  for that CIMOM will appear in the IBM Director Group Content pane.
                  The CIMOM service must be accessible through the IP network. The TCP/IP network
                  configuration on the host where TotalStorage Productivity Center is installed must include
                  in its list of domain names all the domains that contain storage devices that are
                  discoverable by the TotalStorage Productivity Center.

              It is important to verify that the CIMOM is up and running. To do that, use the following
              command from the TotalStorage Productivity Center server:
                  telnet CIMip port

              Here, CIMip is the IP address where the CIM Agent runs, and port is the port value used for
              the communication (5989 for a secure connection; 5988 for an unsecure connection).


5.10.1 SLP registration and slptool
              TotalStorage Productivity Center for Disk uses Service Location Protocol (SLP) discovery,
              which requires that all of the CIMOMs that TotalStorage Productivity Center for Disk
              discovers be registered with SLP.

              SLP can only discover CIMOMs that are registered in its IP subnet. For CIMOMs outside of
              the IP subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure
              that the CIM_InteropSchemaNamespace and Namespace attributes are specified.

              For example, type the following command:
                  slptool register service:wbem:https://myhost.com:port

              Here, myhost.com is the name of the server hosting the CIMOM, and port is the port
              number of the service, such as 5989.
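When several CIMOMs outside the subnet must be registered, the command can be repeated in a loop. The sketch below is only illustrative: the addresses are the example lab addresses used in 5.10.3, and the commands are echoed rather than executed so that they can be reviewed before slptool is invoked for real.

```shell
# Sketch: build slptool registration commands for a list of remote CIMOMs.
# Remove the echo to actually perform the registrations.
CIMOMS="9.43.226.237 9.11.209.188"
for cimom in $CIMOMS; do
  echo slptool register "service:wbem:https://${cimom}:5989"
done
```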




5.10.2 Persistency of SLP registration
             Although it is acceptable to register services manually into SLP, SLP users can also
             statically register legacy services (applications that were not compiled to use the SLP
             library) using a configuration file that SLP reads at startup, called slp.reg.

             All of the registrations are maintained by slpd and remain registered as long as slpd is
             running. The Service Location Protocol (SLP) registration is lost if the server where SLP
             resides is rebooted or when the Service Location Protocol (SLP) service is stopped.

             A manual Service Location Protocol (SLP) registration is needed for all the CIMOMs outside
             the subnet where the SLP DA resides.

              Important: To avoid manually registering the CIMOMs outside the subnet every time the
              Service Location Protocol (SLP) is restarted, create a file named slp.reg.

              The default location for the registration file is the C:\winnt directory on Windows
              machines, or the /etc directory on UNIX machines.

              slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is
              received.


5.10.3 Configuring slp.reg file
            Example 5-2 shows a typical slp.reg file:

            Example 5-2 An slp.reg file
            #############################################################################
            #
            # OpenSLP static registration file
            #
            # Format and contents conform to specification in IETF RFC 2614, see also
            # http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
            #
            #############################################################################


            #----------------------------------------------------------------------------
            # Register Service - SVC CIMOMS
            #----------------------------------------------------------------------------

            service:wbem:https://9.43.226.237:5989,en,65535
            # use default scopes: scopes=test1,test2
            description=SVC CIMOM Open Systems Lab, Cottle Road
            authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
            creation_date=04/02/20

            service:wbem:https://9.11.209.188:5989,en,65535
            # use default scopes: scopes=test1,test2
            description=SVC CIMOM Tucson L2 Lab
            authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
            creation_date=04/02/20

            #service:wbem:https://9.42.164.175:5989,en,65535
            # use default scopes: scopes=test1,test2
            #description=SVC CIMOM Raleigh SAN Central
            #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
            #creation_date=04/02/20

#----------------------------------------------------------------------------
              # Register Service - SANFS CIMOMS
              #----------------------------------------------------------------------------

              #service:wbem:https://9.82.24.66:5989,en,65535
              #Additional parameters for setting the appropriate namespace values
              #CIM_InteropSchemaNamespace=root/cimv2
              #Namespace=root/cimv2
              # use default scopes: scopes=test1,test2
              #description=SANFS CIMOM Gaithersburg ATS Lab
              #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
              #creation_date=04/02/20

              #service:wbem:https://9.11.209.148:5989,en,65535
              #Additional parameters for setting the appropriate namespace values
              #CIM_InteropSchemaNamespace=root/cimv2
              #Namespace=root/cimv2
              # use default scopes: scopes=test1,test2
              #description=SANFS CIMOM Tucson L2 Lab
              #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
              #creation_date=04/02/20

              #----------------------------------------------------------------------------
              # Register Service - FAStT CIMOM
              #----------------------------------------------------------------------------

              #service:wbem:https://9.1.39.65:5989,en,65535
              #CIM_InteropSchemaNamespace=root/lsissi
              #ProtocolVersion=0
              #Namespace=root/lsissi
              # use default scopes: scopes=test1,test2
              #description=FAStT700 CIMOM ITSO Lab, Almaden
              #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
              #creation_date=04/02/20
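A file like the one in Example 5-2 can also be built from the command line. The following sketch appends one static registration; the path defaults to a scratch file here so you can inspect the result before copying it to /etc/slp.reg (or C:\winnt\slp.reg on Windows). The address is the lab example address from Example 5-2.

```shell
# Sketch: append a static CIMOM registration to an slp.reg file.
# SLP_REG points at a scratch file here; use /etc/slp.reg on a real SLP DA.
SLP_REG="${SLP_REG:-/tmp/slp.reg}"
cat >> "$SLP_REG" <<'EOF'
service:wbem:https://9.43.226.237:5989,en,65535
description=SVC CIMOM Open Systems Lab, Cottle Road
EOF
# slpd re-reads slp.reg when it receives SIGHUP, for example:
#   kill -HUP "$(pidof slpd)"
```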




Part 3. Configuring the IBM TotalStorage Productivity Center
                 In this part of the book we provide information about customizing the components of the
                 IBM TotalStorage Productivity Center product suite, for the following components:
                     IBM TotalStorage Productivity Center for Disk
                     IBM TotalStorage Productivity Center for Replication
                     IBM TotalStorage Productivity Center for Fabric
                     IBM TotalStorage Productivity Center for Data

                 We also include a chapter on how to set up the individual (sub) agents on a managed host.




© Copyright IBM Corp. 2005. All rights reserved.                                                           245
Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk
                 This chapter provides information about the basic tasks that you need to complete after you
                 install IBM TotalStorage Productivity Center for Disk:
                     Define SLP DA servers to IBM TotalStorage Productivity Center for Disk
                     Discover CIM Agents
                     Configure CIM Agents to IBM TotalStorage Productivity Center for Disk
                     Discover Storage devices
                     Install the remote GUI




6.1 Productivity Center for Disk Discovery summary
               Productivity Center for Disk discovers both IBM storage devices that comply with the SMI-S
               and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are
               discovered using SLP. The Productivity Center for Disk server software performs SLP
               discovery on the network. The User Agent looks for all registered services with a service type
               of service:wbem. Productivity Center for Disk performs the following discovery tasks:
                  Locates individual CIM Agents
                  Locates individual storage devices
                  Retrieves vital characteristics for those storage devices
                  Populates the internal Productivity Center for Disk databases with the discovered
                  information

              Productivity Center for Disk can also access storage devices through the CIM Agent
              software. Each CIM Agent can control one or more storage devices. After the CIMOM
              services are discovered through SLP, Productivity Center for Disk contacts each of the
              CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM.
              Productivity Center for Disk gathers the vital characteristics of each of these devices.

              For Productivity Center for Disk to successfully communicate with the CIMOMs, you must
              meet the following conditions:
                  A common user name (superuser) and password must be set during installation of the
                  IBM TotalStorage Productivity Center for Disk base. This user name and password can be
                  changed using the Configure MDM task in the Productivity Center for Disk interface. If a
                  CIMOM is not configured with the matching user name and password, then you must
                  configure each CIMOM with the correct userid and password using the panel shown in
                  Figure 6-16 on page 258. We recommend that the common user name and password be
                  used for each CIMOM.
                  The CIMOM service must be accessible through the IP network. The TCP/IP network
                  configuration on the host, where Productivity Center for Disk is installed, must include in
                  its list of domain names all the domains that contain storage devices that are discoverable
                  by Productivity Center for Disk.

               It is important to verify that the CIMOM is up and running. To do that, use the following command:
                  telnet CIMip port

              Here, CIMip is the IP address where the CIM Agent runs, and port is the port value used for
              the communication (5989 for a secure connection; 5988 for an unsecure connection).



6.2 SLP DA definition
              Productivity Center for Disk can discover CIM Agents on the same subnet through SLP
              without any additional configuration. SLP DA should be set up on each subnet as described in
              5.7.3, “Setting up the Service Location Protocol Directory Agent” on page 221. The SLP DA
              can then be defined to Productivity Center for Disk using the panel located at Options →
              Discovery Preferences → MDM SLP Configuration as shown in Figure 6-1 on page 249.
              Enter the IP address of the server with the SLP DA into the SLP directory agent host box and
              click Add.




248   IBM TotalStorage Productivity Center V2.3: Getting Started
We assume that you have followed the steps outlined in Chapter 5, “CIMOM install and
configuration” on page 191. Complete the following tasks in order to discover devices
defined to your Productivity Center common base host. Make sure that:
   All CIM agents are running and are registered with the SLP server.
   The SLP agent host is defined in the IBM Director options (Figure 6-1) if it resides in a
   different subnet from that of the TotalStorage Productivity Center server (Options →
   Discovery Preferences → MDM SLP Configuration tab).

 Note: If the Productivity Center common base host server resides in the same subnet as
 the CIMOM, then it is not a requirement that the SLP DA host IP address be specified in
 the Discovery Preferences panel as shown in Figure 6-2. Refer to Chapter 2, “Key
 concepts” on page 27 for details on SLP discovery.

Here we provide a step-by-step procedure:
1. Discovery happens automatically based on preferences that are defined on the Options
    → Discovery Preferences → MDM SLP Configuration tab. The default values for Auto
   discovery interval and Presence check interval are set to 0 (see Figure 6-1). These
   values should be set to more suitable values, for example, 1 hour for Auto discovery
   interval and 15 minutes for Presence check interval. The values you specify have a
   performance impact on the CIMOMs and the Productivity Center common base server,
   so do not set them too low.




Figure 6-1 Setting discovery preferences




                         Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk   249
Continue entering IP addresses for all SLP DA servers. Click OK when finished (see
              Figure 6-2).




              Figure 6-2 Discovery preference set

              2. Turn off automatic inventory on discovery.

                Important: Because of the time and CIMOM resources needed to perform inventory on
                storage devices, it is undesirable and unnecessary to perform this operation each time
                Productivity Center common base performs a device discovery.




Turn off automatic inventory by selecting Options → Server Preferences as shown in
Figure 6-3.




Figure 6-3 Selecting Server Preferences

Now clear the Collect On Discovery check box as shown in Figure 6-4; all other options
can remain unchanged. Click OK when done.




Figure 6-4 Server Preferences




3. You can click Discover all Systems in the top left corner of the IBM Director Console to
                 initiate an immediate discovery task (see Figure 6-5).




              Figure 6-5 Discover All Systems icon




4. You can also use the IBM Director Scheduler to create a scheduled job for new device
   discovery.
   – Either click the scheduler icon in the IBM Director tool bar or use the menu, Tasks →
     Scheduler (see Figure 6-6).




Figure 6-6 Tasks Scheduler option for Discovery

   – In the Scheduler, click File → New Job (see Figure 6-7).




Figure 6-7 Task Scheduler Discovery job


– Establish parameters for the new job. Under the Date/Time tab, include date and time
                    to perform the job, and whether the job is to be repeated (see Figure 6-8).




              Figure 6-8 Discover job parameters

                  – From the Task tab (see Figure 6-9), select Discover MDM storage devices/SAN
                    Elements, then click Select.




              Figure 6-9 Discover job selection task




– Click File → Save as, or use the Save as icon.
   – Provide a descriptive job name in the Save Job panel (see Figure 6-10) and click OK.




Figure 6-10 Discover task job name

Now run the discovery process by selecting Tasks → Discover Systems → All Systems
and Devices (Figure 6-11).




Figure 6-11 Perform discovery




Double-click the Manage CIMOM task to see the status of the discovery (Figure 6-12).




              Figure 6-12 Configure CIMOMs

              The CIMOMs will appear in the list as they are discovered.


6.2.1 Verifying and managing CIMOMs availability
               You may now verify that TotalStorage Productivity Center for Disk can authenticate to and
               discover the CIMOM agent services that are registered with the SLP DA.

               Launch the IBM Director Console and select TotalStorage Productivity Center for Disk →
               Manage CIMOMs in the tasks panel as shown in Figure 6-13. The panel shows the status of
               the connection to the respective CIMOM servers. Our ITSO DS CIMOM server connection
               status is indicated in the first line, with IP address 9.1.38.48, port 5996, and a status of
               Success.




              Figure 6-13 Manage CIMOM panel


It should not be necessary to change any information if you followed the recommendation to
use the same superuser ID and password for all CIMOMs. To configure a CIMOM, select it
and click Properties (Figure 6-14).




Figure 6-14 Select a CIMOM to configure

1. In order to verify and re-confirm the connection, you may select the respective connection
   status and click Properties. Figure 6-16 on page 258 shows the properties panel. Verify
   the information and update it if necessary. The namespace, user name, and password are
   picked up automatically, so they do not normally need to be entered manually. This user
   name is used by TotalStorage Productivity Center for Disk to log on to the CIMOM.
   If you have difficulty getting a successful connection, then you may manually enter
   namespace, username, and password. Update the properties panel and test the
   connection to the CIMOM:
   a. Enter the Namespace value. It is root/ibm for the ESS, DS6000, and DS8000. It is
      interop for the DS4000.
   b. Select the protocol. It is typically https for the ESS, DS6000, and DS8000. It is http for
      the DS4000.
   c. Enter the User name and password. The default is the superuser password entered
      earlier. If you entered a different user name and password with the setuser command
      for the CIM agent, then enter that user name and password here.
   d. Click Test Connection to verify correct configuration.
   e. You should see the panel in Figure 6-15. Click Close on the panel.




Figure 6-15 Successful test of the connection to a CIMOM




f. Click OK on the panel shown in Figure 6-16 to save the properties.




              Figure 6-16 CIMOM Properties

               2. After the connection to the CIMOM is successful, perform discovery again as shown
                  before in Figure 6-11 on page 255. This discovers the storage devices connected to
                  each CIMOM (Figure 6-17).




              Figure 6-17 DS4000 CIMOM Properties Panel

              3. Click the Test Connection button to see a panel similar to Figure 6-15 on page 257,
                 showing that the connection is successful.

                Tip: If you move or delete CIMOMs in your environment, the old CIMOM entries are not
                automatically updated, and entries with a Failure status will be seen as in Figure 6-13 on
                page 256. These invalid entries can slow down discovery performance, as TotalStorage
                Productivity Center tries to contact them each time it performs a discovery.

                You cannot delete CIMOM entries directly from the Productivity Center common base
                interface. Delete them using the DB2 control center tool as described in 16.6, “Manually
                removing old CIMOM entries” on page 911.




6.3 Disk and Replication Manager remote GUI
        It is possible to install a TotalStorage Productivity Center for Disk console on a server other
        than the one on which the TotalStorage Productivity Center for Disk code is installed. This
        allows you to manage TotalStorage Productivity Center for Disk from a secondary location.
        Having a secondary TotalStorage Productivity Center for Disk console will offload workload
        from the TotalStorage Productivity Center for Disk server.

         Note: You are only installing the IBM Director and TotalStorage Productivity Center for
         Disk console code. You do not need to install any other code for the remote console.

        In our lab we installed the remote console on a dedicated Windows 2000 server with 2 GB
        RAM. You must install all the consoles and clients on the same server. Here are the steps:
        1. Install the IBM Director console.
        2. Install the TotalStorage Productivity Center for Disk console.
        3. Install the Performance Manager client if the Performance Manager component is
           installed.

        Installing the IBM Director console
        Follow these steps:
        1. Start the setup.exe of IBM Director.
        2. The main IBM Director window (Figure 6-18) opens. Click INSTALL IBM DIRECTOR.




        Figure 6-18 IBM Director installer




3. In the IBM Director Installation panel (Figure 6-19), select IBM Director Console
                 installation.




              Figure 6-19 Installation options for IBM Director

              4. After a moment, the InstallShield Wizard for IBM Director Console panel (Figure 6-20)
                 opens. Click Next.




              Figure 6-20 Welcome panel



5. In the License Agreement panel (Figure 6-21), select I accept the terms in the license
   agreement. Then click Next.




Figure 6-21 License Agreement

6. The next panel (Figure 6-22) contains information about enhancing IBM Director.
   Click Next to continue.




Figure 6-22 Enhance IBM Director




7. The Feature and installation directory selection panel (Figure 6-23) allows you to change
                 how a program feature is installed. Click Next.




              Figure 6-23 Selecting the program features to install

              8. In the Ready to Install the Program window (Figure 6-24), accept the default selection.
                 Then click Install to start the installation.




              Figure 6-24 Ready to Install the Program panel




9. The installation takes a few minutes. When it is finished, you see the InstallShield Wizard
   Completed window (Figure 6-25). Click Finish to complete the installation.




Figure 6-25 Installation finished

The remote console of IBM Director is now installed.




Installing the remote console for Productivity Center for Disk
              To install the remote console for Productivity Center for Disk follow these steps:
              1. Insert the installation media for Productivity Center for Disk and Replication Base.
              2. Change to the W2K directory. Figure 6-26 shows the files in that directory.




              Figure 6-26 Files in the W2K directory

               3. Start the LaunchPad.bat batch file. Coincidentally, this file has the same name as the
                  TotalStorage Productivity Center Launchpad, although it has nothing to do with it.
              4. Click Installation wizard to begin the installation (Figure 6-27 on page 265).




Figure 6-27 Multiple Device Manager LaunchPad

5. For a brief moment, you see a DOS box with the installer being unpacked. When this is
   done, you see the Welcome window shown in Figure 6-28. Click Next.




Figure 6-28 Welcome window


6. The License Agreement window (Figure 6-29) is displayed. Select I accept the terms in
   the license agreement and click Next.




              Figure 6-29 License Agreement




7. The Destination Directory window (Figure 6-30) opens. Accept the default path or enter
   the target directory for the installation. Click Next.




Figure 6-30 Installation directory




8. In the Select Product Type window (Figure 6-31), select Productivity Center for Disk
                 and Replication Base Console for the product type. Click Next.




              Figure 6-31 Installation options




9. The Preview window (Figure 6-32) contains the installation information. Review it and click
   Install to start the console install.




Figure 6-32 Summary

10.When you reach the Finish window, click Finish to exit the add-on installer (Figure 6-33).




Figure 6-33 Installation finished


11.You return to the IBM TotalStorage Productivity Center for Disk and Replication Base
                 installer window shown in Figure 6-27 on page 265. Click Exit to end the installation.

              The IBM Director remote console is now installed. The add-ons for IBM TotalStorage
              Productivity Center for Disk and Replication Base have been added.

              If the TotalStorage Productivity Center Launchpad is installed, it detects that the IBM Director
              remote console is available the next time the LaunchPad is started. Also the Launchpad can
              now be used to start IBM Director.


6.3.1 Installing Remote Console for Performance Manager function
After installing the IBM Director Console and the TotalStorage Productivity Center for Disk
base console, you need to install the remote console for the Performance Manager function.
To do this, insert the CD-ROM that contains the TotalStorage Productivity Center for Disk
code and double-click setup.exe. In our example, we used the downloaded code as shown in
Figure 6-34.




              Figure 6-34 Screenshot of our lab download directory location




Next, you see the Welcome panel shown in Figure 6-35. Click Next.




Figure 6-35 Welcome panel from TotalStorage Productivity Center for Disk installer

The License Agreement panel shown in Figure 6-36 on page 272 appears. Select I accept
the terms in the license agreement and click Next to continue.




Figure 6-36 Accept the terms of license agreement.

              Choose the default destination directory as shown in Figure 6-37 and click Next.




              Figure 6-37 Choose default destination directory


In the next panel, choose to install Productivity Center for Disk Client and click Next as
shown in Figure 6-38.




Figure 6-38 Select Product Type




In the next panel, select both product check boxes if you want to install the console and the
command line client for the Performance Manager function (see Figure 6-39). Click Next.




              Figure 6-39 TotalStorage Productivity Center for Disk features selection




The Productivity Center for Disk Installer - CoServer Parameters panel opens (see
Figure 6-40). Enter the TPC user ID and password and the IP address that the remote console
will use to authenticate with the TPC server. These are the IP address of the TPC server and
the IBM Director logon credentials.




Figure 6-40 Productivity Center for Disk Installer - CoServer parameters

The Productivity Center for Disk Installer - Preview panel appears (see Figure 6-41 on
page 276). Review the information and click Install to start the process of installing the remote
console.




Figure 6-41 Productivity Center for Disk Installer - Preview

When the install is complete, you see the Productivity Center for Disk Installer - Finish
panel, as shown in Figure 6-42. Click Finish to complete the install process.




              Figure 6-42 TotalStorage Productivity Center for Disk finish panel



6.3.2 Launching Remote Console for TotalStorage Productivity Center
You can launch the remote console from the TotalStorage Productivity Center desktop icon
on the remote console server. The window shown in Figure 6-43 opens.




           Figure 6-43 TotalStorage Productivity Center launch window

Click Manage Disk Performance and Replication as highlighted in the figure. This
launches the IBM Director remote console.

You can log on to the IBM Director server and start using the remote console functions,
except for Replication Manager.

 Note: At this point, you have installed the remote console for the Performance Manager
 function only, not for Replication Manager. You can install the remote console for
 Replication Manager if you want.




7


    Chapter 7.   Configuring TotalStorage
                 Productivity Center for
                 Replication
                 This chapter provides information to help you customize the TotalStorage Productivity Center
                 for Replication component of the TotalStorage Productivity Center. In particular, we describe
                 how to set up a remote GUI and CLI.




© Copyright IBM Corp. 2005. All rights reserved.                                                          279
7.1 Installing a remote GUI and CLI
A replication session can be managed remotely using the graphical user interface (GUI) and
command line interface (CLI). To install them, follow this procedure:
1. Copy the suite install and Replication Manager code to the computer that you want to use.
2. In the suite install folder, double-click the setup.exe file to launch the installer wizard.
              3. At the language panel (Figure 7-1), choose the language you wish to use during the
                 install.




              Figure 7-1 Select a language

              4. At the welcome screen (Figure 7-2), click Next.




              Figure 7-2 Welcome screen




5. The software license agreement panel appears (Figure 7-3). Click the radio button next to
   I accept the terms of the license agreement and click Next to continue.




Figure 7-3 License agreement

6. In the TotalStorage Productivity Center install options panel (Figure 7-4), click the radio
   button next to User interface installations of Data, Disk, Fabric, and Replication and
   click Next.




Figure 7-4 TotalStorage Productivity Center install options




7. In the Remote GUI/Command Line Client component window (Figure 7-5), check the box
                 by The Productivity Center for Replication - Command Line Client and click Next.




              Figure 7-5 Select Remote GUI/Command Line Client

              8. A window opens (Figure 7-6) to begin the replication command line client install.




              Figure 7-6 Replication command client install




9. In the next window, enter the location of the Replication Manager install package
   (Figure 7-7).




Figure 7-7 Install package location for replication

10.A window opens prompting you to interact with the Replication Manager install wizard
   (Figure 7-8).




Figure 7-8 Launch Replication Manager installer




11.The window in Figure 7-9 appears while the install wizard is launched.




              Figure 7-9 Launching installer

              12.The Productivity Center for Replication Installer - Welcome wizard window (Figure 7-10)
                 opens. Click Next.




              Figure 7-10 Replication remote CLI install wizard




13.Specify the directory path of the Replication Manager installation files in the window
   shown in Figure 7-11. Click Next.




Figure 7-11 Replication remote CLI Installer - destination directory



14.In the CoServer Parameters window shown in Figure 7-12, enter the following information:
   –   Host Name: Host name or IP address of the Replication Manager server
   –   Host Port: Port number of the Replication Manager server (default value is 9443)
   –   User Name: User name of the CIM Agent managing the storage device(s)
   –   User Password: User password of the CIM Agent managing the storage device(s)
   Click Next to continue.




Figure 7-12 Replication remote CLI Installer - coserver parameters

              15.Review the information in the Preview window shown in Figure 7-13 and click Install.




              Figure 7-13 Replication remote CLI Installer - preview




16.After successfully installing the remote CLI, the window in Figure 7-14 appears. Click
   Finish.




Figure 7-14 Replication remote CLI Installer - finish

17.After clicking Finish, the postinstall.txt file opens. You can read the file now, or close it
   and view it later.
18.A window opens informing you of a successful installation (see Figure 7-15). Click Next to
   finish.




Figure 7-15 Remote CLI installation successful



8


    Chapter 8.   Configuring IBM TotalStorage
                 Productivity Center for Data
                 This chapter describes the necessary tasks to start using IBM TotalStorage Productivity
                 Center for Data in your environment. After you install Productivity Center for Data, there are a
                 few remaining steps to perform, but you can start to use it without performing these steps at
                 first. Most people use Productivity Center for Data to look at the environment and see how the
                 storage capacity is distributed. This chapter focuses on what is necessary to fulfill this task.

                 The following procedures are covered in this chapter:
                     Configuring a discovered IBM TotalStorage Enterprise Storage Server (ESS) Common
                     Information Model (CIM) Agent
                     Configuring a discovered Fibre Array Storage Technology (FAStT) CIM Agent
                     Adding a CIM Agent that is located in a remote network
                     Setting up the IBM TotalStorage Productivity Center for Data Web interface
                     Setting up a remote console

                 We also recommend that you perform the following actions, although we do not describe
                 them here:
                     Setting up the alerting dispositions: Simple Network Management Protocol (SNMP),
                     Tivoli Enterprise Console (TEC), and mail
                     Setting up retention of log files and other information




8.1 Configuring the CIM Agents
Configuration of the CIM Agents for IBM TotalStorage Productivity Center for Data is different
from the configuration that you perform within Productivity Center for Disk. This section
explains how to set up a CIM Agent in two cases: when it was discovered by the Data Manager,
and when the CIM Agent is located in a different subnet and multicasts are not enabled.

              Here is an overview of the procedure to work with CIM Agents:
              1. Perform discovery of a new CIM Agent (using Service Location Protocols (SLP)).
              2. Configure the discovered CIM Agent properties or definition of a new CIM Agent.
3. Discovery collects information about the device.
              4. After the characteristics are available, set up the device for monitoring.
              5. A probe on the device gathers information about the disks and logical unit numbers
                 (LUNs).


8.1.1 CIM and SLP interfaces within Data Manager
              The CIM interface within Data Manager is used only to gather information about the disks, the
              LUNs, and some asset information. The data is correlated with the data that the manager
              receives from the agents.

              Since there is no way to install the agent of Data Manager directly on a storage subsystem,
              Data Manager obtains the information from storage subsystems by using the Storage
              Management Initiative - Specification (SMI-S) standard. This standard uses another standard,
              CIM.

Data Manager uses this interface to access a storage subsystem. A CIM Agent (also called a
CIM Object Manager (CIMOM)), which ideally runs within the subsystem but can also run on
a separate host, announces its existence by using SLP. You can learn more about this
protocol in 2.3, “Service Location Protocol (SLP) overview” on page 38.

An SLP User Agent (UA) is integrated within Data Manager, and that agent performs a
discovery of devices. This discovery is limited to the local subnet of the Data Manager, and is
              expanded only if multicasts are enabled on the network routers. See “Multicast” on page 43
              for details.

              Unlike Productivity Center for Disk, the User Agent that is integrated within the Data Manager
              cannot talk to an SLP Directory Agent (DA). This restriction requires you to manually
              configure every storage subsystem that was not automatically discovered.
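Before manually configuring such a CIM Agent, it can help to confirm that its CIM-XML port is reachable from the Data Manager host. The following Python sketch is an illustration, not part of the product: the host name is a placeholder for your environment, and 5989 and 5988 are the conventional CIM-XML https and http ports.

```python
import socket

def cim_agent_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the CIM Agent port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# "cimagent.example.com" is a placeholder host name for your CIM Agent.
for port in (5989, 5988):
    if cim_agent_reachable("cimagent.example.com", port):
        print(f"CIM Agent answering on port {port}")
```

A False result for both ports usually means a firewall or routing problem, which would also prevent the Data Manager from talking to a manually configured agent.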


8.1.2 Configuring CIM Agents
              The procedure to configure a CIM Agent is simple. If a CIM Agent was discovered, you simply
              enter the security information. We use the term CIM Agent instead of CIMOM because this is
              a more generic term.

              Figure 8-1 on page 291 shows the panel where the CIM Agents are configured. In our
              example, the first two entries show CIM Agents that were discovered but are not yet
              configured. The last two entries show an ESS and a FAStT CIM Agent that have already been
              configured.




If you want to configure a CIM Agent that cannot be discovered because of the restriction
explained in 8.1.1, “CIM and SLP interfaces within Data Manager” on page 290, then you also
need to enter the IP address and select the right protocol.




Figure 8-1 Selecting CIM Agent Logins

If you completed the worksheets (see Appendix A, “Worksheets” on page 991), have them
available for the next steps.

Configuring discovered CIM Agents
For discovered CIM Agents that are not configured, complete these steps:
1. In the CIM/OM Login Administration panel (Figure 8-1), highlight the discovered CIM
   Agent. Click Edit.
2. The Edit CIM/OM Login Properties window (Figure 8-2 on page 292) opens. Proceed as
   follows:
   a. Verify the IP address, port, and protocol.

        Note: Not all CIM Agents provide a secure communication via https. For example,
        FAStT does not provide https, so you have to select http.

   b. Enter the name and password for the user which was configured in the CIM Agent of
      that device.

         Note: At the time of this writing, a FAStT CIM Agent does not use a special user to
         secure the access. Data Manager still requires input in the user and password
         fields, so type anything you want.

   c. If you selected https as the protocol to use, enter the complete path and file name of
      the certificate file that is used to secure the communication between the CIM Agent
      and the Data Manager.

         Note: The Truststore file of the ESS CIM Agent is located in the C:\Program
         Files\ibm\cimagent directory on the CIM Agent Host.



d. Click Save to finish the configuration.




              Figure 8-2 CIM Agent login properties


              Configuring new CIM Agents
              If you have to enter a CIM Agent manually, click New in the CIM/OM Login Administration
              panel (Figure 8-1 on page 291). The New CIM/OM Login Properties window (Figure 8-3)
              opens. You perform the same steps as described in “Configuring discovered CIM Agents” on
              page 291. For a new CIM Agent, you must also specify the IP address and protocol to use.
              The port is set depending on the protocol.




Figure 8-3 New CIM Agent login properties


            Next steps
            After you configure the CIM Agent properties, run discovery on the storage subsystems.
            During this process, the Data Manager talks to the CIM Agent to gather information about the
            devices.

            When this is completed, you see an entry for the subsystem in the Storage Subsystem
            Administration panel (Figure 8-5 on page 294).


8.1.3 Setting up a disk alias
            Optionally, you can change the name of a disk subsystem to a more meaningful name:
1. In the Data Manager GUI, in the Navigation Tree, expand the Administrative Services →
   Configuration → Data Manager subtree as shown in Figure 8-4. Select Storage
   Subsystem Administration.




            Figure 8-4 Navigation Tree



2. The panel shown in Figure 8-5 on page 294 opens.
                  a. Highlight the subsystem.
                  b. Place a check mark in the Monitored column.

                        Note: Select the Monitored column if you want Data Manager to probe the
                        subsystem whenever a probe job is run against it. If you deselect the Monitored
                        check box for a storage subsystem, the following actions occur:
                        – All the data gathered by the server for the storage subsystem is removed from
                          the enterprise repository.
                        – You can no longer run Monitoring, Alerting, or Policy Management jobs against
                          the storage subsystem.

                  c. Click Set disk alias.




              Figure 8-5 Storage Subsystem Administration

              3. The Set Disk Alias window (Figure 8-6) opens.
                  a. Enter the Alias/Name.
                  b. Click OK to finish.




              Figure 8-6 Set Disk Alias

              4. You may need to refresh the GUI for the changes to become effective. Right-click an old
                 entry in the Navigation Tree, and select Refresh.

              Next steps
Now that you have set up the CIM Agent properties and specified the subsystems to monitor,
run a probe against them to collect data about the disks and LUNs. After you do this, you can
look at the results in different reports.




8.2 Setting up the Web GUI
          The Web GUI is basically the same as the remote GUI that you can install on any machine.
          You simply use a Web browser to download a Java™ application that is then launched.

          We show only the basic setup of the Web server, which may not be very secure. The objective
          here is to gain access to the Data Manager from a machine that does not have the remote
          GUI installed.

              Attention: We had the Tivoli Agent Manager running on the same machine. The Agent
              Manager comes with an application (the Agent Recovery Service) that uses port 80, so we
              had to find an unused port on the same machine.

              In addition, you must be careful if you use the Internet Information Server (IIS). IIS uses
              several ports by default which may interfere with the installed WebSphere Application
              Server. Therefore we recommend that you use the IBM HTTP Server.


8.2.1 Using IBM HTTP Server
          This section explains how to set up the IBM HTTP Server to make the remote GUI available
          via the Web. When you install WebSphere Application Server on a machine, the IBM HTTP
          Server is installed on the same machine.

The IBM HTTP Server does not come with an administration GUI. Instead, you use
configuration files to modify any settings.

The HTTP server is installed in C:\Program Files\WebSphere\AppServer\HTTPServer. This
directory contains the conf subdirectory, which contains the httpd.conf file that is used to
configure the server.
1. In the C:\Program Files\WebSphere\AppServer\HTTPServer\conf directory, open the
   httpd.conf file.
2. Locate the line where the port is defined (see Example 8-1). Change the port number. In
   our example, we used 2077.

          Example 8-1 Abstracts of the httpd.conf file
          ServerName GALLIUM
# This is the main server configuration file. See URL http://www.apache.org/
          # for instructions.

          # Do NOT simply read the instructions in here without understanding
          # what they do, if you are unsure consult the online docs. You have been
          # warned.

          # Originally by Rob McCool

          #   Note: Where filenames are specified, you must use forward slashes
#   instead of backslashes. e.g. "c:/apache" instead of "c:\apache". If
          #   the drive letter is omitted, the drive where Apache.exe is located
          #   will be assumed


          ....

          # Port: The port the standalone listens to.
          #Port 80



Port 2077


              3. Locate the line AfpaEnable. Comment out the three Afpa... lines as shown in Example 8-2.

              Example 8-2 Afpa
              #AfpaEnable
              #AfpaCache on
#AfpaLogFile "C:\Program Files\WebSphere\AppServer\HTTPServer/logs/afpalog" V-ECLF




4. Locate the line that starts with <Directory. Modify the line to point the directory to
   C:\Program Files\IBM\TPC\Data\gui as shown in Example 8-3.

Example 8-3 Directory setting
# ---------------------------------------------------------------------------
# This section defines server settings which affect which types of services
# are allowed, and in what circumstances.

# Each directory to which Apache has access, can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).

#   Note: Where filenames are specified, you must use forward slashes
#   instead of backslashes. e.g. "c:/apache" instead of "c:\apache". If
#   the drive letter is omitted, the drive where Apache.exe is located
#   will be assumed

# First, we configure the "default" to be a very restrictive set of
# permissions.

#   Note that from this point forward you must specifically allow
#   particular features to be enabled - so if something's not working as
#   you might expect, make sure that you have specifically enabled it
#   below.

# This should be changed to whatever you set DocumentRoot to.

#<Directory "C:\Program Files\WebSphere\AppServer\HTTPServer/htdocs/en_US">
<Directory "C:\Program Files\IBM\TPC\Data\gui">

5. Locate the line that starts with DocumentRoot. Modify the line to point the directory to
   C:\Program Files\IBM\TPC\Data\gui as shown in Example 8-4.

Example 8-4 DocumentRoot
#   --------------------------------------------------------------------------------
#   In the following section, you define the name space that users see of your
#   http server. This also defines server settings which affect how requests are
#   serviced, and how results should be formatted.

# See the tutorials at http://www.apache.org/ for
# more information.

#   Note: Where filenames are specified, you must use forward slashes
#   instead of backslashes. e.g. "c:/apache" instead of "c:\apache". If
#   the drive letter is omitted, the drive where Apache.exe is located
#   will be assumed.

# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.

#DocumentRoot "C:\Program Files\WebSphere\AppServer\HTTPServer/htdocs/en_US"
DocumentRoot "C:\Program Files\IBM\TPC\Data\gui"




6. Locate the line that starts with DirectoryIndex. Modify the line to use tpcd.html as the
   index document as shown in Example 8-5.

              Example 8-5 Directory index
              # DirectoryIndex: Name of the file or files to use as a pre-written HTML
              # directory index. Separate multiple entries with spaces.

              #DirectoryIndex index.html
              DirectoryIndex tpcd.html

7. Save the file.
8. Start the HTTP server:
   a. Open a command prompt.
   b. Change to the directory C:\Program Files\WebSphere\AppServer\HTTPServer.
   c. Type apache, and press Enter.
   This starts the HTTP server as a foreground application.
9. Now, when you use a Web browser, simply enter:
   http://servername:portnumber
   In our environment, we entered:
   http://gallium:2077

              You see a Web page, and a Java application is then loaded. (Java is installed if necessary.)

                Note: Do not omit the http://. Since we do not use the default, you have to tell the
                browser which protocol to use.
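Hand-editing httpd.conf is error-prone, so the edits from the steps above can also be scripted. The following Python sketch is illustrative only; the port and GUI directory are the values from our example and may differ in your environment. Forward slashes are used in the path, as the httpd.conf comments themselves recommend, which also keeps the regular-expression replacement strings free of backslash escapes.

```python
import re

def retarget_httpd_conf(conf: str, port: int, gui_dir: str) -> str:
    """Apply the httpd.conf edits described in the steps above to the
    text of the file: set the port, disable Afpa, and point the server
    at the Data Manager Web GUI directory."""
    # Replace the active Port directive with the chosen port.
    conf = re.sub(r"(?m)^Port\s+\d+", f"Port {port}", conf)
    # Comment out the Afpa... lines.
    conf = re.sub(r"(?m)^(Afpa\w+)", r"#\1", conf)
    # Point <Directory ...> and DocumentRoot at the GUI directory.
    conf = re.sub(r'(?m)^<Directory\s+"[^"]*">', f'<Directory "{gui_dir}">', conf)
    conf = re.sub(r'(?m)^DocumentRoot\s+"[^"]*"', f'DocumentRoot "{gui_dir}"', conf)
    # Serve tpcd.html as the index document.
    conf = re.sub(r"(?m)^DirectoryIndex\s+\S+", "DirectoryIndex tpcd.html", conf)
    return conf

sample = 'Port 80\nAfpaEnable\nDocumentRoot "C:/old"\nDirectoryIndex index.html\n'
print(retarget_httpd_conf(sample, 2077, "C:/Program Files/IBM/TPC/Data/gui"))
```

Run the function over the contents of httpd.conf, review the result, and write it back before restarting the server; the manual steps remain the authoritative procedure.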




8.2.2 Using Internet Information Server
           If you have IIS installed on the server running Data Manager, use these steps to enable the
           access to the remote GUI via a Web site.

            Attention: If you have WebSphere Application Server running on the same server, be
            careful not to create port conflicts, especially since port 80 is in use by both applications.

           1. Start the Internet Information Services administration GUI.
           2. A window opens as shown in Figure 8-7. In the left panel, right-click the entry with your
              host name and select New → Web Site to launch the Web Site Creation Wizard.




           Figure 8-7 Internet Information Server administration GUI




3. The Web Site Creation Wizard opens, displaying the Welcome panel (see Figure 8-8).
                 Click Next.




              Figure 8-8 Web Site Creation Wizard

              4. The Web Site Description panel (Figure 8-9) opens. Enter a description in the panel and
                 click Next.




              Figure 8-9 Web Site Description panel




5. The IP Address and Port Settings panel (Figure 8-10) opens. Enter an unused port
   number and click Next.




Figure 8-10 IP Address and Port Setting panel

6. In the Web Site Home Directory panel (Figure 8-11), enter the home directory of the Web
   server. This is the directory where the files for the remote Web GUI are stored. The default
   is C:\Program Files\IBM\TPC\Data\gui.
   Click Next.




Figure 8-11 Web Site Home Directory panel




7. The Web Site Access Permissions panel (Figure 8-12) opens. Accept the default access
                 permissions, and click Next.




              Figure 8-12 Web Site Access Permissions panel

              8. When you see the window indicating that you have successfully completed the Web Site
                 Creation Wizard (Figure 8-13), click Finish.




              Figure 8-13 Setup finished

              9. In the Internet Information Services window (Figure 8-7 on page 299), right-click the new
                 Web server entry, and select Properties.




10.The Data Manager Properties window (Figure 8-14) opens.
              a. Select the Documents tab.




           Figure 8-14 Adding a default document

              b. Click Add.
              c. In the window that opens, enter tpcd.html. Click OK.
              d. Click OK to close the Properties window.
11.Now, when you use a Web browser, simply enter:
   http://servername:portnumber
   In our installation, we entered:
   http://gallium:2077

           You see a Web page and a Java application is loaded. (Java is installed if necessary.)

            Note: Do not omit the http://. Since we don’t use the default, you have to tell the browser
            which protocol to use.
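The point of the note above can be expressed as a trivial helper that always builds the GUI URL with an explicit scheme. This is an illustrative sketch only; the host and port are the values from our lab setup.

```python
def gui_url(server: str, port: int) -> str:
    """Build the Web GUI URL, forcing an explicit http:// scheme
    so the browser does not have to guess the protocol."""
    if "://" in server:  # the caller already supplied a scheme
        return f"{server}:{port}"
    return f"http://{server}:{port}"

# In our installation this yields http://gallium:2077
print(gui_url("gallium", 2077))
```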


8.2.3 Configuring the URL in Fabric Manager
           The user properties file of the Fabric Manager contains settings that control polling, SNMP
           traps destination, and the fully qualified host name of Data Manager. As an administrator, you
           can use srmcp manager service commands to display and set the values in the user
           properties file.

           The srmcp ConfigService set command sets the value of the specified property to a new
           value in the user properties file (user.properties). This command can be run only on the
           manager computer.



Issuing a command on Windows
              Use these steps to enter a command using Windows.
1. Open a command prompt window.
2. Change the directory to installation_directory\manager\bin\w32-ix86. The default
   is C:\Program Files\IBM\TPC\Fabric\manager\bin\w32-ix86.
3. Enter the following command:
   setenv
4. Enter the following command:
   srmcp -u Administrator -p password ConfigService set SRMURL
   http://data.itso.ibm.com:2077

              The change is picked up immediately. There is no need to restart Fabric Manager.



8.3 Installing the Data Manager remote console
To install the remote console for Productivity Center for Data, use the procedure explained in
this section. You can also start the installation using the Suite Installer. However, when the
Data Manager installer is launched, you begin with the first step of the procedure that follows.
1. Select a language. We selected English. The Welcome panel (Figure 8-15) opens.




              Figure 8-15 Welcome panel

2. The next panel (see Figure 8-16 on page 305) is the Software License Agreement. Select
   I accept the terms in the license agreement and click Next to continue.




Figure 8-16 License agreement

3. The next panel allows you to select the components to be installed. For the remote
   console installation, select the User interface installations of Data, Disk, Fabric, and
   Replication (see Figure 8-17). Click Next to continue.




Figure 8-17 Product selection




4. The next panel allows you to select which Remote GUI will be installed. Select the
                 Productivity Center for Data (see Figure 8-18) and click Next to continue.




              Figure 8-18 Remote GUI selection panel

              5. The next panel is informational (see Figure 8-19) and verifies that the Productivity Center
                 for Data GUI will be installed.




              Figure 8-19 Verification panel




6. The install package location panel is displayed. Specify the required information (see
   Figure 8-20) and click Next to continue.




Figure 8-20 Install package location

7. Another information panel is displayed (see Figure 8-21) indicating that the product
   installer will be launched. Click Next to continue.




Figure 8-21 The installer will be launched




8. In the window that opens (see Figure 8-22), select Install Productivity Center
                 for Data and click Next.




              Figure 8-22 Installation action

              9. The License Agreement panel (Figure 8-23) opens. Select I have read and AGREE to
                 abide by the license agreement above and click Next.




              Figure 8-23 License Agreement


10.A License Agreement Confirmation window (Figure 8-24) opens. Click Yes to confirm.




Figure 8-24 License Agreement Confirmation

11.The next window that opens prompts you to specify what you want to install (see
   Figure 8-25). In this example, we already had the agent installed on our machine.
   Therefore all options are still available. Select The GUI for reporting and click Next.




Figure 8-25 Installation options




12.In the Productivity Center for Data Parameters panel (Figure 8-26), enter the Data
                 Manager connection details and a Data Manager server name. Change the port if
                 necessary and click Next.




              Figure 8-26 Data Manager connection details




13.In the Space Requirements panel (Figure 8-27), you can change the installation directory
   or leave the default. Click Next.




Figure 8-27 Installation directory

14.If the directory does not exist, you see the message shown in Figure 8-28. Click OK to
   continue or Cancel to change the directory.




Figure 8-28 Directory does not exist




15.You see the window shown in Figure 8-29 indicating that Productivity Center for Data has
                 verified your entries and is ready to start the installation. Click Next to start the installation.




              Figure 8-29 Ready to start the installation




16.During the installation, you see a progress indicator. When the installation is finished, you
           see the Install Progress panel (Figure 8-30). Click Done to exit the installer.




        Figure 8-30 Installation completed

        The IBM TotalStorage Productivity Center for Data remote console is now installed.

        If the TotalStorage Productivity Center Launchpad is installed, it detects that the Productivity
        Center for Data remote console is available the next time the Launchpad is started. The
        Launchpad can then be used to start Productivity Center for Data.



8.4 Configuring Data Manager for Databases
        Complete the following steps before attempting to monitor your databases with Data
        Manager.
        1. Go to Administrative Services → Configuration → General → License Keys and
           double-click IBM TPC for Data - Databases (Figure 8-31 on page 314).




Figure 8-31 TPC for Data - Databases License Keys

              2. From the list of agents, select those you wish to monitor by checking the box under
                 Licensed (Figure 8-32). After checking the desired boxes, click the RDBMS Logins tab.




              Figure 8-32 TPC for Data - Databases Licensing tab




3. To successfully scan a database, you must provide a login name and password for each
   instance. Click Add New... (Figure 8-33).




Figure 8-33 RDBMS Logins

4. In the RDBMS Login Editor window (Figure 8-34), enter the required information:
   –   Database - The database type you wish to monitor
   –   Agent Host - The host you wish to monitor
   –   Instance - The name of the instance
   –   User - The login ID for the instance
   –   Password - The password for the instance
   –   Port - The port on which the database is listening




Figure 8-34 RDBMS Login Editor
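The RDBMS Login Editor fields map naturally onto a simple record. The following Python sketch (field names taken from the panel; the sample values and the validation itself are purely illustrative, not product behavior) shows one way such an entry could be collected and checked before registration:

```python
from dataclasses import dataclass

@dataclass
class RdbmsLogin:
    """One RDBMS Logins entry, mirroring the RDBMS Login Editor fields."""
    database: str    # database type to monitor (for example, "DB2" or "Oracle")
    agent_host: str  # host to monitor
    instance: str    # name of the instance
    user: str        # login ID for the instance
    password: str    # password for the instance
    port: int        # port on which the database is listening

    def validate(self) -> None:
        # Every text field is required; the port must be a valid TCP port.
        for name in ("database", "agent_host", "instance", "user", "password"):
            if not getattr(self, name):
                raise ValueError(f"{name} is required")
        if not (0 < self.port < 65536):
            raise ValueError(f"port out of range: {self.port}")

# Hypothetical entry; host, instance, and credentials are placeholders only.
login = RdbmsLogin("DB2", "dbhost.example.com", "DB2INST1", "db2admin", "secret", 50000)
login.validate()
```

An incomplete entry (for example, an empty login name) fails validation, which mirrors the requirement above that every instance has a login name and password before a scan can succeed.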




5. After the database is successfully registered, click OK (Figure 8-35).




              Figure 8-35 RDBMS successfully registered



8.5 Alert Disposition
              This section describes the alerting options that you can configure. These options define
              how alerts are generated when a corresponding event is discovered. To open the panel shown
              in Figure 8-36, go to Administrative Services → Configuration → General → Alert
              Disposition.




              Figure 8-36 Alert Disposition panel




You can specify these parameters:
   SNMP
   – Community - The name of the SNMP community for sending traps
   – Host - The system (event manager) that will receive the traps
   – Port - The port on which traps will be sent (the standard port is 162)
   TEC (Tivoli Enterprise Console)
   – TEC Server - The system (TEC) that will receive the traps
   – TEC Port - The port to which traps will be sent (the standard port is 5529)
   E-mail
   –   Mail Server - The mail server that is used for sending the e-mail
   –   Mail Port - The port used for sending the mail to the mail server
   –   Default Domain - The default domain to use for sending the e-mail
   –   Return To - The return address for undeliverable e-mail
   –   Reply To - The address to use when replying to an alert-triggered e-mail
   Alert Log Disposition
   – Delete Alert Log Records - How long Alert Log records are kept before they are deleted
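The disposition settings above amount to a handful of host/port pairs with well-known defaults (162 for SNMP traps, 5529 for TEC). A Python sketch of how such settings might be represented and merged with user-supplied values (the structure, key names, and the e-mail port default are illustrative only; this is not a product API):

```python
# Hypothetical representation of the Alert Disposition settings. The SNMP and
# TEC default ports (162 and 5529) come from the panel described above; the
# mail port default of 25 is simply the conventional SMTP port.
DEFAULTS = {
    "snmp": {"community": "public", "host": "", "port": 162},
    "tec": {"server": "", "port": 5529},
    "email": {"mail_server": "", "mail_port": 25, "default_domain": "",
              "return_to": "", "reply_to": ""},
}

def merge_disposition(overrides: dict) -> dict:
    """Overlay user-supplied values on the defaults, section by section."""
    merged = {section: dict(values) for section, values in DEFAULTS.items()}
    for section, values in overrides.items():
        if section not in merged:
            raise KeyError(f"unknown section: {section}")
        merged[section].update(values)
    return merged

# Only the SNMP trap destination is customized; all other defaults remain.
settings = merge_disposition({"snmp": {"host": "eventmgr.example.com"}})
```

With this shape, leaving a section out of the overrides keeps its standard values, which matches how the panel pre-fills the standard ports.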




9


    Chapter 9.   Configuring IBM TotalStorage
                 Productivity Center for Fabric
                 This chapter explains the steps that you must follow, after you install IBM TotalStorage
                 Productivity Center for Fabric from the CD, to configure the environment.

                 Refer to 4.3.6, “IBM TotalStorage Productivity Center for Fabric” on page 157, which shows
                 the installation procedure for installing IBM TotalStorage Productivity Center for Fabric using
                 the Suite Installer.

                 IBM TotalStorage Productivity Center for Fabric is a rebranding of IBM Tivoli Storage Area
                 Network Manager. Since the configuration process has not changed, the information provided
                 is still applicable.

                 This IBM Redbook complements the IBM Redbook IBM Tivoli Storage Area Network
                 Manager: A Practical Introduction, SG24-6848. You may also want to refer to that redbook to
                 learn about design or deployment considerations, which are not covered in this redbook.




9.1 TotalStorage Productivity Center component interaction
              This section discusses the interaction between IBM TotalStorage Productivity Center for
              Fabric and the other IBM TotalStorage Productivity Center components. IBM TotalStorage
              Productivity Center interaction includes external products and devices. IBM TotalStorage
              Productivity Center for Fabric uses standard calls to devices to provide and gather
              information to enable it to provide information about your environment.


9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base
              When a supported storage area network (SAN) Manager is installed and configured, IBM
              TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for
              Replication leverages the SAN Manager to provide enhanced function. Along with basic
              device configuration functions, such as logical unit number (LUN) creation, allocation,
              assignment, and deletion for single and multiple devices, basic SAN management functions
              such as LUN discovery, allocation, and zoning are provided in one step. In Version 2.1 of
              TotalStorage Productivity Center, IBM TotalStorage Productivity Center for Fabric is the
              supported SAN Manager.

              The set of SAN Manager functions that are exploited are:
                  The ability to retrieve SAN topology information, including switches, hosts, ports, and
                  storage devices.
                  The ability to retrieve and to modify the zoning configuration on the SAN.
                  The ability to register for event notification — this ensures that IBM TotalStorage
                  Productivity Center for Disk is aware when the topology or zoning changes as new
                  devices are discovered by the SAN Manager, and when host LUN configurations change.


9.1.2 SNMP
              IBM TotalStorage Productivity Center for Fabric acts as a Simple Network Management
              Protocol (SNMP) manager to receive traps from managed devices in the event of status
              changes or updates. These traps are used to manage all the devices that the Productivity
              Center for Fabric is monitoring to provide the status window shown by NetView. These traps
              should then be passed onto a product, such as Tivoli Event Console (TEC), for central
              monitoring and management of multiple devices and products within your environment. When
              using the IBM TotalStorage Productivity Center Suite Installer, the SNMP configuration is
              performed for you. If you install IBM TotalStorage Productivity Center for Fabric manually,
              then you need to configure SNMP.
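Acting as an SNMP manager here means, at the transport level, listening on UDP port 162 for trap datagrams from devices. The following minimal Python sketch illustrates only that receive step, using a bare UDP socket on an unprivileged, OS-assigned port (real trap handling requires an SNMP library to decode the PDUs, and binding the standard port 162 requires administrator rights):

```python
import socket

# A bare UDP receiver stands in for the real trap listener, which would bind
# port 162 and decode SNMP PDUs. A second socket plays the role of a device
# sending a trap. Everything stays on the loopback interface.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as rx:
    rx.bind(("127.0.0.1", 0))              # OS picks a free, unprivileged port
    port = rx.getsockname()[1]
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as tx:
        tx.sendto(b"dummy-trap", ("127.0.0.1", port))  # simulated device trap
    rx.settimeout(5)
    data, sender = rx.recvfrom(4096)
print(data)  # b'dummy-trap'
```

The payload here is a placeholder string; an actual trap would be a BER-encoded SNMP message that the manager parses to decide whether the topology or device status changed.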

              The NetView code that is provided when you install IBM TotalStorage Productivity Center for
              Fabric is to be used only for this product. If you configure this NetView as your SNMP
              listening device for non-IBM TotalStorage Productivity Center for Fabric purposes, then you
              need to purchase the relevant NetView license.


9.1.3 Tivoli Provisioning Manager
              Tivoli Provisioning Manager uses IBM TotalStorage Productivity Center for Fabric when it
              performs its data resource provisioning. Provisioning is the use of workflows to provide
              resources (data or server) whenever workloads exceed specified thresholds and dictate that
              a resource change is necessary to continue to satisfy service-level agreements or business
              objectives. If the new resources are data resources which are part of the SAN fabric, then
              IBM TotalStorage Productivity Center for Fabric is invoked to provide LUN allocation, path
              definition, or zoning changes as necessary.


Refer to the IBM Redbook Exploring Storage Management Efficiencies and Provisioning:
            Understanding IBM TotalStorage Productivity Center and IBM TotalStorage Productivity
            Center with Advanced Provisioning, SG24-6373, which presents an overview of the product
            components and functions. It explains the architecture and shows the use of storage
            provisioning workflows.



9.2 Post-installation procedures
            This section discusses the next steps that we performed after the initial product installation
            from the CD, to take advantage of the function IBM TotalStorage Productivity Center for
            Fabric provides. After you install the Fabric Manager, you need to decide on which machines
            you will install the Agent and on which machines you will install the Remote Console. The
            following sections show how to install these components.


9.2.1 Installing Productivity Center for Fabric – Agent
            This section explains how to install the Productivity Center for Fabric – Agent. The installation
            must be performed by someone who has a user ID with administrator rights (Windows) or root
            authority (UNIX).

            We used the Suite Installer to install the Agent. You can also install directly from the
            appropriate subdirectory on the CD. Because the installation is Java based, it looks the same
            on all platforms.
            1. In the window that opens, select the language for installation. We chose English. Click
               Next. You will see the Welcome screen (Figure 9-1). Click Next to continue.




            Figure 9-1 Welcome screen




2. The next screen (see Figure 9-2) is the Software License Agreement. Click I accept the
                 terms in the license agreement and click Next to continue.




              Figure 9-2 License Agreement

              3. In the Suite Installer panel (Figure 9-3), select the Agent installations of Data, Fabric,
                 and CIM Agent option. Then click Next.




              Figure 9-3 Suite Installer panel for selecting Agent installations




4. In the next window, select one or more Agent components (see Figure 9-4). In this
   example, we chose The Productivity Center for Fabric - Agent option. Click Next.




Figure 9-4 Agent type selection panel

5. In the next panel, confirm the components to install. See Figure 9-5. Click Next.




Figure 9-5 Productivity Center for Fabric - Agent confirmation




6. As shown in Figure 9-6, enter the install package location. In our case, we installed from
                 the I: drive. You will most likely use the product CD. Click Next.




              Figure 9-6 Input location selection panel

              7. A panel (Figure 9-7) opens indicating that the Productivity Center for Fabric installer will be
                 launched. At this point, the Suite Installer is invoking the Installer process for the individual
                 agent install. If you install the Agent directly from the CD, without using the Suite Installer,
                 you commence the process after this point. The Suite Installer masks a few displays from
                 you when it calls the product installer.
                  Click Next.




              Figure 9-7 Product Installer will be launched


8. In the window that opens, select the language for installation. We chose English. Click
   Next.




Figure 9-8 Welcome screen

9. As shown in Figure 9-9 on page 326, specify the Fabric Manager Name and Fabric Manager Port
   Number. For the Fabric Manager Name, type the machine name of the server where the
   Productivity Center for Fabric - Manager is installed. If the server is in a different
   domain, you must fully qualify the server name. In our case, colorado is the machine name
   of the server where Productivity Center for Fabric - Manager is installed.
   The port number is automatically inserted, but you can change it if you used a different
   port when you installed the Productivity Center for Fabric - Manager.
   Click Next.




Figure 9-9 Fabric Manager name and port option

              10.The Host Authentication password is entered in the panel shown in Figure 9-10. This was
                 specified during the Agent Manager install and is used for agent installs.




              Figure 9-10 Host Authentication password




11.In the next panel (Figure 9-11), you have the option to change the default installation
   directory. We clicked Next to accept the default.




Figure 9-11 Selecting the installation directory

12.The Agent Information panel (Figure 9-12) asks you to specify a label which is applied to
   the Agent on this machine. We used the name of the machine. The port number is the port
   through which this Agent communicates. Click Next.




Figure 9-12 Agent label and port




13.In the panel in Figure 9-13, specify the account that the Fabric Agent is to run under. We
                 used the Administrator account.




              Figure 9-13 Fabric agent account

              14.In the Agent Management Information panel (Figure 9-14 on page 329), enter the location
                 of the Tivoli Agent Manager.
                  In our configuration, colorado is the machine name in our Domain Name Server (DNS)
                  where Tivoli Agent Manager is installed. The Registration Password is the password that
                  you used when you installed Tivoli Agent Manager.
                  Click Next.




Figure 9-14 Agent Manager information

15.Finally you see a confirmation panel (Figure 9-15) that shows the installation summary.
   Review the information and click Next.




Figure 9-15 Installation summary panel




16.You see the installation status bar. Then you see a panel indicating a successful
                 installation (Figure 9-16). Click Finish.




              Figure 9-16 Successful installation panel

              17.The panel in Figure 9-17 indicates the successful install of the Fabric agent.




              Figure 9-17 Successful install of fabric agent panel.

              18.You return to the Suite Installer window where you have the option to install other Agents.
                 Click Cancel to finish.



Upon successful installation, you notice that nothing is added to the Start menu. The only
           evidence that the Agent is installed and running is that a Service is automatically started.
           Figure 9-18 shows the started Services in our Windows environment.




           Figure 9-18 Common Agent Service indicator

           If you look in the Control Panel, under Add/Remove Programs, there is now an entry for IBM
           TotalStorage Productivity Center for Fabric - Agent. To remove the Agent, click this entry.


9.2.2 Installing Productivity Center for Fabric – Remote Console
           This section explains how to install the Productivity Center for Fabric – Remote Console.

           Pre-installation tasks
           Before you begin the installation, make sure that you have met the requirements that are
           discussed in the following sections.

           SNMP service installed
           Make sure that you have installed the SNMP service and have an SNMP community name of
           Public defined. For more information, see 3.10, “Installing SNMP” on page 73.

           Existing Tivoli NetView installation
           If you have an existing Tivoli NetView 7.1.4 installation, you can use it with Productivity
           Center for Fabric installation. If you have any other version installed, you must uninstall it
           before you install Productivity Center for Fabric – Remote Console.


Installing the console
              The Productivity Center for Fabric Console remotely displays information about the monitored
              SAN. A user who has Administration rights must perform the installation. At the time of writing
              this redbook, this installation was supported on the Windows 2000 and Windows XP
              platforms. The following steps show a successful installation. We used the Suite Installer to
              install the Console, and the following windows reflect that process.
              1. Select the language. We selected English. The next panel (see Figure 9-19) is the
                 installer Welcome panel.




              Figure 9-19 Installer Welcome panel.




2. The next screen (see Figure 9-20) is the Software License Agreement. Click I accept the
   terms in the license agreement and click Next to continue.




Figure 9-20 License agreement panel

3. The first Suite Installer window (Figure 9-21) opens. Select the User interface
   Installations of Data, Fabric, and Replication option and then click Next.




Figure 9-21 Suite Installer for selecting Console




4. In the next panel (Figure 9-22), select one or more remote GUI or command line
                 components. To install the console, select The Productivity Center for Fabric - Remote
                 GUI Client. Click Next.




              Figure 9-22 Selecting the Remote Console

              5. In the installation confirmation panel (Figure 9-23), click Next.




              Figure 9-23 Remote GUI Client installation confirmation




6. As shown in Figure 9-24, enter the location of the source code for the installation. In most
   cases, this is the product CD drive. In our case, we installed the code from the E: drive. Click
   Next.




Figure 9-24 Source code location panel

7. The next panel (Figure 9-25) indicates that the Fabric Installer will be launched. If you
   install the Agent directly from the CD, without using the Suite Installer, you begin the
   process after this point. The Suite Installer masks a few displays from you when it calls the
   product installer. Click Next.




Figure 9-25 Installer will be launched


8. Select the language. We selected English. The Suite Installer launches the Fabric installer
                 (see Figure 9-26).




              Figure 9-26 Productivity Center for Fabric installer launched

              9. The InstallShield Wizard opens for IBM TotalStorage Productivity Center for Fabric -
                 Console (see Figure 9-27, “InstallShield Wizard for Console” on page 336). Click Next.




              Figure 9-27 InstallShield Wizard for Console




10.In the next panel, you can specify the location of the directory into which the product will
   be installed. Figure 9-28 shows the default location. Click Next.




Figure 9-28 Default installation directory

11.Specify the name and port number of the host where the Productivity Center for Fabric
   Manager is installed. See Figure 9-29. Click Next.




Figure 9-29 Productivity Center for Fabric Manager details




12.In the next panel (Figure 9-30), specify a starting port number from which the installer will
                 allocate a series of ports for communication. We used the default. Click Next.




              Figure 9-30 Starting port number

              13.Type the password that you will use for all remote consoles or that the managed hosts will
                 use for authentication with the manager (see Figure 9-31). This password must be the
                 same as the one you entered in the Fabric Manager Installation. Click Next.




              Figure 9-31 Host Authentication panel




14.Specify the drive where NetView is to be installed or accept the default (see Figure 9-32).
   Click Next.




Figure 9-32 Selecting the NetView installation drive

15.As shown in Figure 9-33, specify a password which will be used to run the NetView
   Service. Then click Next.




Figure 9-33 NetView Service password




16.A panel opens that displays a summary of the installation (Figure 9-34). Click Next to
                 begin the installation.




              Figure 9-34 Summary panel

              17.The installation completes successfully as indicated by the message in the panel shown in
                 Figure 9-35. Click Next.




              Figure 9-35 Installation successful message




18.You are prompted to restart your machine (Figure 9-36). You may elect to restart
   immediately or at another time. We chose Yes, restart my computer. Click Finish.




Figure 9-36 Restart computer request

After rebooting your system, you see a new Service is automatically started, as shown in
Figure 9-37.




Figure 9-37 NetView Service

To start the Remote Console, click Start → Programs → Tivoli NetView → NetView
Console.


9.3 Configuring IBM TotalStorage Productivity Center for
    Fabric
              This section explains how to configure IBM TotalStorage Productivity Center for Fabric.


9.3.1 Configuring SNMP
              When using the IBM TotalStorage Productivity Center Suite Installer, the SNMP configuration
              is performed for you. If you install IBM TotalStorage Productivity Center for Fabric manually,
              then you need to configure the Productivity Center for Fabric.

              There are several ways to configure Productivity Center for Fabric for SNMP traps.

              Method 1: Forward traps to the local Tivoli NetView console
              In this scenario, you set up the devices to send SNMP traps to the NetView console, which is
              installed on the Productivity Center for Fabric Server. Figure 9-38 shows an example of this
              setup.




              Figure 9-38 SNMP traps to local NetView console




NetView listens for SNMP traps on port 162, and the default community is public. When the
trap arrives to the Tivoli NetView console, it is logged in the NetView Event browser and then
forwarded to Productivity Center for Fabric as shown in Figure 9-39. Tivoli NetView is
configured during installation of the Productivity Center for Fabric Server for trap forwarding to
the Productivity Center for Fabric Server.




Figure 9-39 SNMP trap reception

NetView forwards SNMP traps to the defined TCP/IP port, which is the sixth port derived from
the base port defined during installation. We used the base port 9550, so the trap forwarding
port is 9556.
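Because the forwarding port is always derived from the base port chosen at installation, the relationship in the paragraph above reduces to a one-line calculation (offset of 6, as described in the text):

```python
def trap_forwarding_port(base_port: int) -> int:
    """The trap-forwarding port is the base installation port plus 6."""
    return base_port + 6

# The base port used in this book's environment is 9550.
print(trap_forwarding_port(9550))  # 9556
```

This is worth keeping in mind when a non-default base port was selected during installation, since the entry in trapfrwd.conf must name the derived port, not the base port.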

With this setup, the SNMP trap information appears in the NetView Event browser.
Productivity Center for Fabric uses this information for changing the topology map.

 Note: If the traps are not forwarded to Productivity Center for Fabric, the topology map is
 updated based on the information coming from Agents at regular polling intervals. The
 default Productivity Center for Fabric Server installation (including the NetView installation)
 sets up the trap forwarding correctly.

Existing NetView installation
If you installed Productivity Center for Fabric with an existing NetView, you need to set up
trap forwarding:
1. Configure the Tivoli NetView trapfrwd daemon. Edit the trapfrwd.conf file in the
   \usr\ov\conf directory. This file has two sections: Hosts and Traps.
   a. Modify the Hosts section to specify the host name and port to forward traps to (in our
      case, port 9556 on host COLORADO.ALMADEN.IBM.COM).
   b. Modify the Traps section to specify which traps Tivoli NetView should forward. The
      traps to forward for Productivity Center for Fabric are:
         1.3.6.1.2 * (Includes MIB-2 traps and McDATA’s FC Management MIB traps)
         1.3.6.1.3 * (Includes FE MIB and FC Management MIB traps)
         1.3.6.1.4 * (Includes proprietary MIB traps and QLogic’s FC Management MIB traps)




Example 9-1 shows a sample trapfrwd.conf file.

              Example 9-1 trapfrwd.conf file
              [Hosts]
              #host1.tivoli.com 0
              #localhost 1662
              colorado.almaden.ibm.com 9556
              [End Hosts]
              [Traps]
              #1.3.6.1.4.1.2.6.3 *
              #mgmt
              1.3.6.1.2 *
              #experimental
              1.3.6.1.3 *
              #Andiamo
              1.3.6.1.4.1.9524 *
              #Brocade
              1.3.6.1.4.1.1588 *
              #Cisco
              1.3.6.1.4.1.9 *
              #Gadzoox
              1.3.6.1.4.1.1754 *
              #Inrange
              1.3.6.1.4.1.5808 *
              #McData
              1.3.6.1.4.1.289 *
              #Nishan
              1.3.6.1.4.1.4369 *
              #QLogic
              1.3.6.1.4.1.1663 *
              [End Traps]
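The trapfrwd.conf layout shown in Example 9-1 can also be checked programmatically. The following Python sketch is our own illustration (not part of the product): it parses the Hosts and Traps sections and decides whether a given trap OID falls under one of the configured prefixes.

```python
# Sketch: parse the [Hosts] and [Traps] sections of a trapfrwd.conf-style
# file, and test whether a trap OID matches a configured prefix.
def parse_trapfrwd(text):
    hosts, traps, section = [], [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if line == "[Hosts]":
            section = hosts
        elif line == "[Traps]":
            section = traps
        elif line in ("[End Hosts]", "[End Traps]"):
            section = None
        elif section is not None:
            section.append(tuple(line.split()))
    return hosts, traps

def forwards_oid(traps, oid):
    # An entry such as "1.3.6.1.2 *" forwards every trap under that prefix.
    return any(oid == prefix or oid.startswith(prefix + ".")
               for prefix, _ in traps)

sample = """[Hosts]
colorado.almaden.ibm.com 9556
[End Hosts]
[Traps]
1.3.6.1.2 *
1.3.6.1.4.1.1588 *
[End Traps]"""
hosts, traps = parse_trapfrwd(sample)
```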

              2. The trapfrwd daemon must be running before traps are forwarded. Tivoli NetView does
                 not start this daemon by default. To configure Tivoli NetView to start the trapfrwd
                 daemon, enter these commands at a command prompt:
                   ovaddobj \usr\ov\lrf\trapfrwd.lrf
                  ovstart trapfrwd
              3. To verify that trapfrwd is running, in NetView, select Options →Server Setup. In the
                 Server Setup – Tivoli NetView window (Figure 9-40 on page 345), you see that trapfrwd
                 is running.




344   IBM TotalStorage Productivity Center V2.3: Getting Started
Figure 9-40 Trapfwd daemon

After trap forwarding is enabled, configure the SAN components, such as switches, to send
their SNMP traps to the NetView console.

 Note: This type of setup gives you the best results, especially for devices where you
 cannot change the number of SNMP recipients and the destination ports.



Method 2: Forward traps directly to Productivity Center for Fabric
In this example, you configure the SAN devices to send SNMP traps directly to the
Productivity Center for Fabric Server. The receiving port number is the primary port number
plus six ports. In this case, traps are only used to reflect the topology changes and they are
not shown in the NetView Event browser.

 Note: Some of the devices do not allow you to change the SNMP port; they send traps
 only to port 162. In such cases, this scenario is not useful.




Method 3: Traps to the Productivity Center for Fabric and SNMP console
              In this example, you set up the SAN devices to send SNMP traps to both the Productivity
              Center for Fabric Server and to a separate SNMP console, which you installed in your
              organization. See Figure 9-41.
              Figure 9-41 SNMP traps for two destinations

              The receiving port number for the Productivity Center for Fabric Server is the primary port
              number plus six ports. The receiving port number for the SNMP console is 162.

              In this case, traps are used to reflect the topology changes, and they are displayed in the
              SNMP console events. The SNMP console in this case can be another Tivoli NetView
              installation or any other SNMP management application.

              For such a setup, the devices must support setting multiple trap recipients and changing
              the trap destination port. Because this functionality is not supported by all devices, we do
              not recommend this scenario.


9.3.2 Configuring the outband agents
              Productivity Center for Fabric Server uses agents to discover the storage environment and to
              monitor its status. These agents are set up in the Agent Configuration panel.
              1. From the NetView console, select SAN →Configuration →Configure Manager.




2. The SAN Configuration window (Figure 9-42) opens.
                   a. Select the Switches and Other SNMP Agents tab on the left side.




Figure 9-42 Selecting switches and other SNMP agents

                   b. You see the outband agents in the right panel. Define all the switches in the SAN that
                      you want to monitor. To define such an Agent, click Add.
                   c. The Enter IP Address window (Figure 9-43) opens. Enter the host name or IP address
                      of the switch and click OK.




                Figure 9-43 Outband agent definition

                   d. The agent appears in the agent list as shown in Figure 9-42. The state of the agent
                      must be Contacted if you want Productivity Center for Fabric to get data from it.
                   e. To remove an already defined agent, select it and click Remove.

                Defining a logon ID for zone information
                Productivity Center for Fabric can retrieve the zone information from IBM Fibre Channel
                Switches and from Brocade Silkworm Fibre Channel Switches. To accomplish this,
                Productivity Center for Fabric uses application programming interface (API) calls to retrieve
zoning information. To use this API, Productivity Center for Fabric must log in to the switch
with administrative rights. If you want to see zoning information, you need to specify the login
                ID for the Agents you define.




Here is the procedure:
                1. In the SAN Configuration window (Figure 9-42 on page 347), select the defined Agent and
                   click Advanced.
                2. In the SNMP Agent Configuration window (Figure 9-44), enter the user name and
                   password for the switch login. Click OK to save this information.




                Figure 9-44 Logon ID definition

                You can now see zone information for your switches.

                 Tip: You must enter user ID and password information for only one switch in each SAN to
                 retrieve the zoning information. We recommend that you enter this information for at least
                 two switches for redundancy. Enabling more switches than necessary for API zone
                 discovery may slow performance.


9.3.3 Checking inband agents
                After you install agents on the managed systems, as explained in 9.2.1, “Installing
                Productivity Center for Fabric – Agent” on page 321, the Agents should appear in the Agent
                Configuration window with an Agent state of Contacted (see Figure 9-42 on page 347). If the
                Agent does not appear in the panel, check the Agent log file for the cause. You can remove
                only Agents that are no longer responding to the server. Such Agents display a status of
                Not responding, as shown in Figure 9-45.




Figure 9-45 Not responding inband Agent



9.3.4 Performing an initial poll and setting up the poll interval
                After you set up the Agents and devices for use with the Productivity Center for Fabric Server,
                you perform the initial poll. You can manually poll using the SAN Configuration panel
                (Figure 9-46):
                1. In NetView, select SAN →Configure.
                2. In the SAN Configuration window, click Poll Now to perform a manual poll.

                    Note: Polling takes time, and depends on the size of the SAN.

                3. If you did not configure trap forwarding for the SAN devices, (as described in 9.3.1,
                   “Configuring SNMP” on page 342), you must define the polling interval. In this case, the
                   topology change will not be event driven from the devices, but will be updated regularly at
                   the polling interval.
                   You set the poll interval in the SAN Configuration panel (Figure 9-46). You can specify the
                   polling interval in:
                   –   Minutes
                   –   Hours
                   –   Days: You can specify the time of the day for polling.
                   –   Weeks: You can specify the day of the week and time of the day for polling.
                   After you set the poll interval, click OK to save the changes.




Figure 9-46 SAN Configuration
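The polling-interval choices above differ only by a scale factor. As a small sketch of our own (the unit names are illustrative, not the panel's exact labels), a chosen interval can be normalized to seconds for comparison:

```python
# Sketch: normalize the SAN Configuration polling-interval choices
# (minutes, hours, days, weeks) to seconds.
SECONDS_PER_UNIT = {
    "minutes": 60,
    "hours": 3600,
    "days": 86400,
    "weeks": 604800,
}

def interval_seconds(value: int, unit: str) -> int:
    return value * SECONDS_PER_UNIT[unit]

print(interval_seconds(12, "hours"))  # 43200
```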


                 Tip: You do not need to configure the polling interval if all your devices are set to send
                 SNMP traps to the local NetView console or the Productivity Center for Fabric Server.


Chapter 10. Deployment of agents
                 Chapter 9, “Configuring IBM TotalStorage Productivity Center for Fabric” on page 319, covers
                 the installation of the managers that are the central part of IBM TotalStorage Productivity
                 Center. During that installation, the Resource Managers of Productivity Center for Data and
                 Productivity Center for Fabric were installed and registered to a Tivoli Agent Manager, either
                 an existing one, or one that was installed as a prerequisite in the first phase of the installation.

                 This chapter explains how to set up the individual agents (subagents) on a managed host.
                 The agents of Data Manager and Fabric Manager are called subagents, because they reside
                 within the scope of the common agent.




© Copyright IBM Corp. 2005. All rights reserved.                                                                351
10.1 Installing the agents
              There are two ways to set up a new subagent on a host, depending on the state of the target
              machine:
                  –   The common agent is not installed. In this case, install the software using the installer.
                  –   The common agent is installed. In this case, deploy the agent from the Data or Fabric
                      Manager, or install it using the installer.

              To install the agent, follow these steps:
              1. In the Suite Installer panel (Figure 10-1), select Agent installations of Data, Fabric, and
                 CIM Agent. Click Next.




              Figure 10-1 Suite installer installation action




2. In the next panel (Figure 10-2), select one or more agents to install. The options include
   the IBM TotalStorage Enterprise Storage Server (ESS) Common Information Model (CIM)
   Agent. However, this agent does not use any functions of Tivoli Agent Manager.




Figure 10-2 Agent selection panel

The next window prompts you for the location of the installation code. The panel that follows
tells you that the individual product installer is launched and asks you to interact with it.




10.2 Data Agent installation using the installer
              After the product installer for IBM TotalStorage Productivity Center for Data starts, you can
              choose to install, uninstall, or apply maintenance to this component. Because no component
              of Productivity Center for Data is installed on the server yet, the only available option is to
              install.
              1. In the Install window (Figure 10-3), select Install Productivity Center for Data and click
                 Next.




              Figure 10-3 IBM TotalStorage Productivity Center for Data installation action




2. A window opens showing the license agreement (see Figure 10-4). Select the I have read
   and AGREE to abide by the license agreement above check box and click Next.




Figure 10-4 License agreement

3. The License Agreement Confirmation window (Figure 10-5) opens. It asks you to confirm
   that you have read the license agreement. Click Yes.




Figure 10-5 License Agreement Confirmation




4. In the next panel (Figure 10-6), choose the option of Productivity Center for Data that you
                 want to install. To install the agent locally on the same machine on which the installer is
                 currently running, select An agent on this machine and click Next.




              Figure 10-6 Installation options




5. In the Productivity Center for Data Parameters panel (Figure 10-7), enter the server name
   and port of your Productivity Center for Data Manager. In our environment, the name of
   the server was gallium, and the port was the default, which is 2078. We did not need to
   use the fully qualified host name, but this may be different in your environment. Click Next.




Figure 10-7 Data Manager server details

   The installer tries to contact the Data Manager server. If this is successful, you see a
   message like Server gallium:2078 connection successful - server parameters
   verified in the Progress Log section of the installation window (Figure 10-7).
6. The installer checks whether a common agent is already installed on the machine.
   Because in our environment no common agent was installed on the machine, the installer
   issues the message No compatible Common Agents were discovered so one will be
   installed. See the Progress log in Figure 10-8 on page 358.




7. As shown in Figure 10-8, enter the parameters of the agent:
                  a. Use the suggested default port 9510.
                  b. Deselect Agent should perform a SCAN when first brought up, because a scan
                     may take a long time; you can schedule it to run overnight instead.
                  c. Leave Agent may run scripts sent by server as selected.
                  d. The Agent Registration Information is the password that you specified during the
                     installation of the Tivoli Agent Manager.

                       Note: Do not change the common agent port, because this may prevent the
                       deployment of agents later.

                  e. Click Next to continue the installation.




              Figure 10-8 Parameter for the common agent and Data Agent




8. In the Space Requirements panel (Figure 10-9), accept the default directory for the
   common agent installation. Click Next to proceed with the installation.




Figure 10-9 Common Agent installation directory

9. If the directory that you specify does not exist, you see the message shown in
   Figure 10-10. Click OK to acknowledge this message and continue the installation.




Figure 10-10 Creating the directory




10.Figure 10-11 shows the last panel before the installation starts. Review the progress log. If
                 you want to review the parameters, click Prev to go to the previous panels. Then click
                 Next.




              Figure 10-11 Review settings




11.The installation starts and displays the progress in the Install Progress window
           (Figure 10-12). The progress bar is not shown in the picture, but you can see the
           messages in the progress log. When the installation is complete, click Done to end the
           installation.




        Figure 10-12 Installation progress

        12.The installer now closes, and the Suite Installer is active again. It also reports the
           successful installation. Click Next to return to the panel shown in Figure 10-1 on page 352
           to install another agent (for example, a Fabric Agent), or click Cancel to exit the installation.



10.3 Deploying the agent
        The deployment of an agent is a convenient way to install a subagent onto multiple machines
        at the same time. You can also use this method if you do not want to install agents directly on
        each machine where an agent should be installed.

        The most important prerequisite software to install on the target machines is the common
        agent. If the common agent is not already installed on the target machine, the deployment will
        not work. For example, if you installed one of the two Productivity Center agents on the
        targets, you can deploy the other agent using the methods described here.

        At the time of this writing, Suite Installer does not have the option to deploy agents, so you
        have to use the native installer setup.exe program for Fabric Manager. The packaging of Data
        Manager is different and you can use the Suite Installer to install it.




Note: For agent deployment, you do not need to have the certificate files available,
                because the target machines already have the necessary certificates installed during the
                common agent installation.


              Data Agent
              You can perform this installation from any machine. It does not have to be the Data Manager
              server itself.

              When you use the Suite Installer, there is no option to deploy agents. However, you can
              choose to install an agent, which launches the Data Manager installer, and then deploy an
              agent instead of installing it (see Figure 10-6 on page 356). We did not use the Suite Installer
              for the agent deployment.
              1. Start the installer by running setup.exe from the Data Manager installation CD.
              2. After a few seconds, you see the panel shown in Figure 10-13. If the Data Manager or the
                 agent is already installed on the machine where you started the installer, select Uninstall
                 Productivity Center for Data or Apply maintenance to Productivity Center for Data.
                 Click Next.




              Figure 10-13 Productivity Center for Data Installation action

              3. A window opens displaying the license agreement (see Figure 10-4 on page 355). Follow
                 the steps as explained in steps 2 on page 355 and 3 on page 355.




4. In the next window that opens (Figure 10-14), select Agents on other machines. Then
   click Next.




Figure 10-14 Productivity Center for Data Install agents options




5. In the Productivity Center for Data Parameters panel (Figure 10-15), enter the Productivity
                 Center for Data server name and the port number. Then click Next.




              Figure 10-15 Data Manager server details




6. The installer tries to verify your input by connecting to the Data Manager server. The
   message Server gallium:2078 connection successful - server parameters verified
   is displayed in the progress log (see Figure 10-16) if it is successful. Click Next.
7. In our environment, we did not have a Windows domain, so we entered the details of the
   target machines manually. Click Manually Enter Agents.




Figure 10-16 Select the Remote Agents to install: Manually entering Agents




8. In the Manually Enter Agent window (Figure 10-17), enter the IP address or host name of
                 the target computer and a user ID and password of valid Windows users on that machine.
                   You can enter more than one machine here only if all the machines can be managed with
                   the same user ID and password. Click OK after you enter all computers that can be
                   managed with the same user ID.




              Figure 10-17 Manually Enter Agents panel




9. The list of computers on which the subagent will be installed is updated and now appears as
   shown in Figure 10-18. If you want to install the subagent onto a second computer, but the
   computer uses a different user ID than the previous one, click Manually Enter Agents
   again to enter the information for that second computer. Repeat this step for every
   computer that uses a different user ID and password.
   After you enter all target computers, click Next.




Figure 10-18 Selecting the Remote Agents to install: Computers targeted for a remote agent install

10.At this time, the installer tries to contact the common agent on target computers to get
   information about them. This may take a while, so at first you cannot select anything in the
   window that is presented next (see Figure 10-19 on page 368).
   Look at the progress log in the lower section of the window to determine what is currently
   happening. If the installer cannot contact the target computer, verify that the common
   agent is running. You can do that by looking at the status of the Windows services of the
   target machine. Another way is to open a telnet connection from a Command Prompt to
   that machine on port 9510.
   c:>telnet 9.1.38.104 9510
   If the common agent is running, it listens for requests on that port and opens a connection.
   You simply see a blank screen. If the common agent is not running, you see the message
   Connecting To 9.1.38.104...Could not open a connection to host on port 9510 :
   Connect failed.
   When the installer is done with this step, you see the message Productivity Center for
   Data subagent on 9.1.38.104 will be installed at C:\Program
   Files\tivoli\ep\TPC\Data.
   Deselect Agent should perform a SCAN when first brought up, because a scan may
   take a long time; you can schedule it to run overnight instead. Click Install to start the
   deployment.


Figure 10-19 Common agent status

              11.When the deployment is finished, you see the message shown in Figure 10-20. Review
                 the progress log. Click OK to end the installation.




              Figure 10-20 Agent deployment installation completed
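The telnet check described in step 10 can also be scripted. This sketch is our own illustration, assuming the default common agent port 9510: it attempts a TCP connection and reports whether anything is listening on the port.

```python
import socket

# Sketch: TCP-connect test equivalent to "telnet <host> 9510".
# Returns True if something (ideally the common agent) is listening.
def agent_port_open(host: str, port: int = 9510, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `agent_port_open("9.1.38.104")` corresponds to the `telnet 9.1.38.104 9510` check shown earlier.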




Fabric Agent
There are differences between the Data Agent deployment and the Fabric Agent deployment:
   –  To remotely deploy one or more Fabric Manager subagents, you must be logged on to
      the Fabric Manager server. This is different from the Data subagent deployment, where
      you can start the installation from any machine.
   –  At this time, there is no way to use the Suite Installer, so you have to use the native
      Fabric Manager installer.
   –  The Fabric Manager comes with a separate package for the Fabric Agent. Data Manager
      comes with only one installation program for all the possible install options (server,
      agent, remote agent, or GUI).

To start the deployment, you start the Fabric Manager installer. You do not start the installer
for the Fabric Agent.
1. Launch setup.exe from the fabric manager installation media.
2. After a Java Virtual Machine is prepared and you select the language of the installer, a
   window opens that prompts you to select the type of installation to perform. See
   Figure 10-21. Select Remote Fabric Agent Deployment and click Next.




Figure 10-21 Installation action

3. A Welcome window opens. Click Next.




4. The IBM License Agreement Panel (Figure 10-22) opens. Select I accept the terms in
                 the license agreement and click Next.




              Figure 10-22 License agreement

              5. The installer connects to the Tivoli Agent Manager and presents a list of hosts. Select the
                 hosts to deploy the agents. See Figure 10-23. Click Next to start the deployment.




              Figure 10-23 Remote host selection




6. The next panel (Figure 10-24) displays the selected hosts. Verify the information. You can
   click Back to change your selection or click Next to start the installation.




Figure 10-24 Remote host confirmation

7. When the installation is completed, you see a summary window similar to the example in
   Figure 10-25. Click Finish.




Figure 10-25 Agent Deployment summary

Your agent should now be installed on the remote hosts.



Part 4. Using the IBM TotalStorage Productivity Center
                 In this part of the book we provide information about using the components of the
                 IBM TotalStorage Productivity Center product suite. We include a chapter filled with hints and
                 tips about setting up the IBM TotalStorage Productivity Center environment and problem
                 determination basics, as well as a chapter on maintaining the DB2 database.




Chapter 11. Using TotalStorage Productivity Center for Disk
                 This chapter provides information about the functions of the Productivity Center common
                 base. It covers these topics:
                     Launching and logging on to TotalStorage Productivity Center
                     Launching device managers
                     Performing device inventory collection
                     Working with the ESS, DS6000, and DS8000 families
                     Working with SAN Volume Controller
                     Working with the IBM DS4000 family (formerly FAStT)
                     Event management




11.1 Productivity Center common base: Introduction
              Before using Productivity Center common base features, you need to perform some
              configuration steps. This will permit you to detect storage devices to be managed. Version 2.3
              of Productivity Center common base permits you to discover and manage:
                  –   ESS 2105-F20, 2105-800, and 2105-750
                  –   DS6000 and DS8000 family
                  –   SAN Volume Controller (SVC)
                  –   DS4000 family (formerly the FAStT product range)

              Provided that you have discovered a supported IBM storage device, Productivity Center
              common base storage management functions will be available for drag-and-drop operations.
              Alternatively, right-clicking the discovered device will display a drop-down menu with all
              available functions specific to it.

              We review the available operations that can be performed in the sections that follow.

                Note: Not all functions of TotalStorage Productivity Center are applicable to all device
                types. For example, you cannot display the virtual disks on a DS4000 because the virtual
                disks concept is only applicable to the SAN Volume Controller. The sections that follow
                cover the functions available for each of the supported device types.



11.2 Launching TotalStorage Productivity Center
              Productivity Center common base along with TotalStorage Productivity Center for Disk and
              TotalStorage Productivity Center for Replication are accessed via the TotalStorage
              Productivity Center Launchpad (Figure 11-1) icon on your desktop. Select Manage Disk
              Performance and Replication to start the IBM Director console interface.




              Figure 11-1 TotalStorage Productivity Center launchpad




Alternatively, access IBM Director from Windows Start → Programs → IBM Director →
         IBM Director Console.

         Log on to IBM Director using the superuser ID and password defined at installation. Note
         that passwords are case sensitive. The login values are:
            IBM Director Server: The host name of the machine where IBM Director is installed
            User ID: The user name to log on with. This is the superuser ID. Enter it in the form
            <hostname>\<username>
            Password: The case-sensitive superuser ID password

        Figure 11-2 shows the IBM Director Login panel you will see after launching IBM Director.




        Figure 11-2 IBM Director Log on
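The user ID entered in the login panel combines the host name and user name with a backslash separator. As a small sketch of our own ('colorado' and 'administrator' are hypothetical values):

```python
# Sketch: compose the IBM Director logon name in hostname\username form.
def director_user_id(hostname: str, username: str) -> str:
    return f"{hostname}\\{username}"

print(director_user_id("colorado", "administrator"))  # colorado\administrator
```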



11.3 Exploiting Productivity Center common base
        The Productivity Center common base module adds the Multiple Device Manager submenu
        task on the right-hand Tasks pane of the IBM Director Console as shown in Figure 11-3 on
        page 378.

         Note: The Multiple Device Manager product has been rebranded to TotalStorage
         Productivity Center for Disk and TotalStorage Productivity Center for Replication. You will
         still see the name Multiple Device Manager in some panels and messages.

        Productivity Center common base will install the following sub-components into the Multiple
        Device Manager menu:
           Launch Device Manager
           Launch Tivoli SAN Manager (now called TotalStorage Productivity Center for Fabric)
           Manage CIMOMs
           Manage Storage Units (menu)
           –   Inventory Status
           –   Managed Disks
           –   Virtual Disks
           –   Volumes


                                          Chapter 11. Using TotalStorage Productivity Center for Disk   377
Note: The Manage Performance and Manage Replication tasks that you see in
                Figure 11-3 become visible when TotalStorage Productivity Center for Disk or TotalStorage
                Productivity Center for Replication are installed. Although this chapter covers Productivity
                Center common base, you would have installed either TotalStorage Productivity Center for
                Disk, TotalStorage Productivity Center for Replication, or both.




              Figure 11-3 IBM Director Console with Productivity Center common base


11.3.1 Launch Device Manager
              The Launch Device Manager task may be dragged onto an available storage device.

              For ESS, this will open the ESS Specialist window for a chosen device. For SAN Volume
              Controller, it will launch a browser session to that device. For DS4000 or FAStT devices, the
              function is not available.



11.4 Performing volume inventory
              This function is used to collect the detailed volume information from a discovered device and
              place it into the Productivity Center common base databases. You need to do this at least
              once before Productivity Center common base can start to work with a device.




When the Productivity Center common base functions are subsequently used to create or
remove LUNs, the volume inventory is kept up to date automatically, so it is not necessary
to repeatedly run inventory collection from the storage devices.

Version 2.3 of Productivity Center common base does not yet provide the full feature set for
all supported storage devices, so you will need to use the storage devices' own management
tools for some tasks. For instance, you can create new VDisks on a SAN Volume Controller
with Productivity Center common base, but you cannot delete them; deletion requires the
SAN Volume Controller's own management tools. For these types of changes to be reflected
in Productivity Center common base, an inventory collection is necessary to resynchronize
the storage device and the Productivity Center common base inventory.
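
The need for resynchronization can be illustrated with a small sketch. This is purely illustrative; the function and data structures are assumptions for the example, not part of the product's interfaces:

```python
# Hypothetical sketch: why re-inventory is needed after out-of-band changes.
# A VDisk deleted with the SVC's own tools leaves a stale inventory entry
# until the next inventory collection resynchronizes the two views.

def find_drift(device_volumes, inventory_volumes):
    """Compare the volumes reported by the storage device with the volumes
    recorded in the Productivity Center common base inventory."""
    on_device = set(device_volumes)
    in_inventory = set(inventory_volumes)
    return {
        "missing_from_inventory": sorted(on_device - in_inventory),  # created outside
        "stale_in_inventory": sorted(in_inventory - on_device),      # deleted outside
    }

# vdisk2 was deleted with the device's native tools, so the inventory is stale.
drift = find_drift(device_volumes=["vdisk1", "vdisk3"],
                   inventory_volumes=["vdisk1", "vdisk2", "vdisk3"])
```

Running an inventory collection is what eliminates both kinds of drift at once.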

 Attention: The use of volume inventory is common to ALL supported storage devices and
 must be performed before disk management functions are available.

To start inventory collection, right-click the chosen device and select Perform Inventory
Collection as shown in Figure 11-4.




Figure 11-4 Launch Perform Inventory Collection

A new panel appears (Figure 11-5 on page 380) as a progress indicator showing that the
inventory process is running. At this stage, Productivity Center common base is
communicating with the relevant CIMOM to collect volume information from the storage
device. After a short while, the panel will indicate that the collection was successful, and you
can close the window.



Figure 11-5 Inventory collection in progress


                Attention: When the panel in Figure 11-5 indicates that the collection has been done
                successfully, it does not necessarily mean that the volume information has been fully
                processed by Productivity Center common base at this point. To track the detailed
                processing status, launch the Inventory Status task as seen in Figure 11-6.




              Figure 11-6 Launch Inventory Status




To see the processing status of an inventory collection, launch the Inventory Status task as
shown in Figure 11-7.




Figure 11-7 Inventory Status

The example Inventory Status panel seen in Figure 11-7 shows the progress of the
processing for a SAN Volume Controller. Use the Refresh button in the bottom left of the
panel to update it with the latest progress. You can also launch the Inventory Status panel
before starting an inventory collection to watch the process end to end.

In our test lab the inventory process time for an SVC took around 2 minutes, end to end.
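
Because inventory collection is asynchronous, tracking it amounts to a simple polling loop, which is what the Refresh button does manually. The sketch below is an assumption for illustration; `get_status` stands in for a real status query and is not a product API:

```python
import time

# Illustrative sketch of tracking an asynchronous inventory collection,
# mirroring the manual Refresh button on the Inventory Status panel.

def wait_for_inventory(get_status, poll_interval=10, timeout=300, sleep=time.sleep):
    """Poll until the inventory job reports COMPLETE, or give up after timeout."""
    waited = 0
    while waited < timeout:
        if get_status() == "COMPLETE":
            return True
        sleep(poll_interval)
        waited += poll_interval
    return False

# Simulated job that completes on the third poll (sleep stubbed out for the demo).
states = iter(["RUNNING", "RUNNING", "COMPLETE"])
done = wait_for_inventory(lambda: next(states), poll_interval=1, sleep=lambda s: None)
```

In practice the SVC inventory in our lab completed well inside the five-minute timeout used here.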




11.5 Changing the display name of a storage device
              You can change the display name of a discovered storage device to something more
              meaningful to your organization. Right-click the chosen storage device (Figure 11-8) and
              select the Rename option.




              Figure 11-8 Changing the display name of a storage device

              Enter a more meaningful device name as in Figure 11-9 and click OK.




              Figure 11-9 Entering a user defined storage device name




11.6 Working with ESS
        This section covers the Productivity Center common base functions that are available when
        managing ESS devices.

        There are two ways to access Productivity Center functions for a given device, and these can
        be seen in Figure 11-10:
           Tasks access: You will see in the right-hand task panel that there are a number of
           available tasks under the Manage Storage Units section. These management functions
           can be invoked by dragging them onto the chosen device. However, not all functions are
           applicable to all supported devices.
           Right-click access: To access all functions available for a specific device, simply
           right-click it to see a drop-down menu of options for that device. Figure 11-10 shows the
           drop-down menu for an ESS.

        Figure 11-10 also shows the functions of TotalStorage Productivity Center for Disk and
        TotalStorage Productivity Center for Replication. Although this chapter only covers the
        Productivity Center common base functions, you would always have installed either
        TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or
        both.




        Figure 11-10 Accessing Productivity Center common base functions




11.6.1 ESS Volume inventory
              To view the status of the volumes available within a given ESS device, perform one of the
              following actions:
                  Right-click the ESS device and select Volumes as in Figure 11-11.
                  On the right-hand side under the Tasks column, drag Managed Storage Units →
                  Volumes onto the storage device you want to query.

                Tip: Before volumes can be displayed, as with other storage devices managed by
                Productivity Center common base, an initial inventory must be completed.

                If you try to view volumes for an ESS that has not been inventoried, you will receive a
                notification that this needs to be done. To perform an inventory collection, see 11.4,
                “Performing volume inventory” on page 378.




              Figure 11-11 Working with ESS volumes




In either case, in the bottom left corner, the status will change from Ready to Starting Task,
           and it will remain this way until the volume inventory appears. Figure 11-12 shows the
           Volumes panel that appears for the selected ESS device.




          Figure 11-12 ESS volume inventory panel



11.6.2 Assigning and unassigning ESS Volumes
           From the ESS volume inventory panel (Figure 11-12), you can modify existing volume
           assignments, either by assigning a volume to one or more new host ports or by removing
           an existing volume-to-host-port mapping.

          To assign a volume to a host port, select the volume then click the Assign host button on the
          right side of the volume inventory panel (Figure 11-12). You will be presented with a panel like
          the one shown below in Figure 11-13 on page 386.

           From the list of available host port worldwide port names (WWPNs), select a single WWPN,
           or hold down the Ctrl key while clicking to select multiple host ports. When the desired host
           ports have been selected for volume assignment, click OK.
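
The effect of this multi-select assignment can be sketched as follows. The dictionary stands in for the product's internal mapping state; the function name and structure are assumptions for illustration only:

```python
# Hedged sketch of the volume-to-host-port assignment described above.
# The mappings dict models volume -> set of assigned host port WWPNs.

def assign_host_ports(mappings, volume_id, wwpns):
    """Add one or more host port WWPNs to a volume's assignment, as when
    Ctrl-selecting several WWPNs in the Assign host ports panel and clicking OK."""
    current = mappings.setdefault(volume_id, set())
    current.update(wwpns)
    # The 'Number of host ports' column reflects the updated mapping count.
    return len(current)

# Example: a volume already mapped to one WWPN gains two more.
mappings = {"ess_vol_1001": {"10000000C9300001"}}
count = assign_host_ports(mappings, "ess_vol_1001",
                          ["10000000C9300002", "10000000C9300003"])
```

Note that a set is the natural model here: assigning an already-mapped WWPN a second time does not change the count.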




Figure 11-13 Assigning ESS LUN’s

              When you click OK, TotalStorage Productivity Center for Fabric will be called to assist with
              zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed,
              you will see a message panel as shown in Figure 11-14.

              When the volume has been successfully assigned to the selected host ports, the Assign host
              ports panel closes and the ESS Volumes panel is displayed again, now reflecting the
              updated count in the Number of host ports column at the far right of the panel.

                Note: If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using
                TotalStorage Productivity Center for Fabric” on page 703, for complete details of its
                operation.

                Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when
                assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.




              Figure 11-14 Tivoli SAN Manager warning




11.6.3 Creating new ESS volumes
          To create new ESS volumes, click the Create button from the Volumes panel, as seen in
          Figure 11-12 on page 385. The Create volume panel appears (Figure 11-15).




          Figure 11-15 ESS create volume

          Use the drop-down fields to select the Storage type and choose from the Available arrays on
          the ESS. Then enter the number of volumes you want to create in the Volume quantity field,
          along with the Requested size. Finally, from the Defined host ports scrolling list, select the
          host ports that should have access to the new volumes. You can select multiple hosts by
          holding down the Ctrl key while clicking.
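
The inputs collected by the Create volume panel can be modeled as a small request structure. The field names mirror the panel labels, but the function and its validation rules are assumptions made for this sketch, not a product interface:

```python
# Illustrative validation of the Create volume inputs (Storage type, array,
# Volume quantity, Requested size, Defined host ports).

def build_create_request(storage_type, array, quantity, requested_size_gb, host_ports):
    """Validate and package the inputs the Create volume panel collects."""
    if quantity < 1:
        raise ValueError("Volume quantity must be at least 1")
    if requested_size_gb <= 0:
        raise ValueError("Requested size must be positive")
    if not host_ports:
        raise ValueError("Select at least one defined host port")
    return {
        "storage_type": storage_type,
        "array": array,
        "quantity": quantity,
        "size_gb": requested_size_gb,
        "host_ports": list(host_ports),
    }

# Example: two 10 GB volumes on a hypothetical array, assigned to one WWPN.
req = build_create_request("FB", "Array 3", 2, 10, ["10000000C9300001"])
```

Validating before submission mirrors the panel's behavior of requiring every field before OK takes effect.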

          On clicking OK, TotalStorage Productivity Center for Fabric is called to assist with zoning
          the new volumes to the hosts. If TotalStorage Productivity Center for Fabric (formerly known
          as Tivoli SAN Manager, or TSANM) is not installed, you will see a message panel as seen in
          Figure 11-16.

          If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using
          TotalStorage Productivity Center for Fabric” on page 703 for complete details of its operation.




          Figure 11-16 Tivoli SAN Manager warning




Figure 11-17 Remove a host path from a volume




              Figure 11-18 Display ESS volume properties


11.6.4 Launch device manager for an ESS device
              This option allows you to link directly to the ESS Specialist of the chosen device:
                  Right-click the ESS storage resource, and select Launch Device Manager.
                  On the right-hand side under the Tasks column, drag Managed Storage Units →
                  Launch Device Managers onto the storage device you want to query.




Figure 11-19 ESS specialist launched by Productivity Center common base



11.7 Working with DS8000
        This section covers the Productivity Center common base functions that are available when
        managing DS8000 devices.

        There are two ways to access Productivity Center functions for a given device, and these can
        be seen in Figure 11-20 on page 390.
           Tasks access: You will see in the right-hand task panel that there are a number of
           available tasks under the Manage Storage Units section. These management functions
           can be invoked by dragging them onto the chosen device. However, not all functions are
           applicable to all supported devices.
           Right-click access: To access all functions available for a specific device, simply
           right-click it to see a drop-down menu of options for that device. Figure 11-10 on page 383
           shows the drop-down menu for an ESS.

        Figure 11-20 on page 390 also shows the functions of TotalStorage Productivity Center for
        Disk and TotalStorage Productivity Center for Replication. Although this chapter only covers
        the Productivity Center common base functions, you would always have installed either
        TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or
        both.




Figure 11-20 Accessing Productivity Center common base functions


11.7.1 DS8000 Volume inventory
              To view the status of the volumes available within a given DS8000 device, perform one of the
              following actions:
                  Right-click the DS8000 device and select Volumes as in Figure 11-21 on page 391.
                  On the right-hand side under the Tasks column, drag Managed Storage Units →
                  Volumes onto the storage device you want to query.

                Tip: Before volumes can be displayed, as with other storage devices managed by
                Productivity Center common base, an initial inventory must be completed.

                If you try to view volumes for a DS8000 that has not been inventoried, you will receive a
                notification that this needs to be done. To perform an inventory collection, see 11.4,
                “Performing volume inventory” on page 378.




Figure 11-21 Working with DS8000 volumes

In either case, in the bottom left corner, the status will change from Ready to Starting Task,
and it will remain this way until the volume inventory appears. Figure 11-22 shows the
Volumes panel that appears for the selected DS8000 device.




Figure 11-22 DS8000 volume inventory panel



11.7.2 Assigning and unassigning DS8000 Volumes
              From the DS8000 volume inventory panel (Figure 11-12 on page 385), you can modify
              existing volume assignments, either by assigning a volume to one or more new host ports or
              by removing an existing volume-to-host-port mapping.

              To assign a volume to a host port, click the Assign host button on the right side of the
              volume inventory panel. You will be presented with a panel like the one in Figure 11-23.
              From the list of available host port worldwide port names (WWPNs), select a single WWPN,
              or hold down the Ctrl key while clicking to select multiple host ports. When the desired host
              ports have been selected for volume assignment, click OK.





              Figure 11-23 Assigning DS8000 LUN’s

              When you click OK, TotalStorage Productivity Center for Fabric will be called to assist with
              zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed,
              you will see a message panel as seen in Figure 11-24 on page 393.

              When the volume has been successfully assigned to the selected host ports, the Assign host
              ports panel closes and the Volumes panel is displayed again, now reflecting the updated
              count in the Number of host ports column at the far right of the panel.

                  Note: If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using
                  TotalStorage Productivity Center for Fabric” on page 703 for complete details of its
                  operation.

                  Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when
                  assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.




Figure 11-24 Tivoli SAN Manager warning


11.7.3 Creating new DS8000 volumes
          To create new DS8000 volumes, click the Create button from the Volumes panel, as seen in
          Figure 11-12 on page 385. The Create volume panel appears (Figure 11-25).




          Figure 11-25 DS8000 create volume

          Use the drop-down fields to select the Storage type and choose from the Available arrays on
          the DS8000. Then enter the number of volumes you want to create in the Volume quantity
          field, along with the Requested size. Finally, from the Defined host ports scrolling list, select
          the host ports that should have access to the new volumes. You can select multiple hosts by
          holding down the Ctrl key while clicking.

          On clicking OK, TotalStorage Productivity Center for Fabric is called to assist with zoning
          the new volumes to the hosts. If TotalStorage Productivity Center for Fabric is not installed,
          you will see a message panel as shown in Figure 11-26 on page 394.








              Figure 11-26 Tivoli SAN Manager warning


11.7.4 Launch device manager for a DS8000 device
              This option allows you to link directly to the DS8000 device manager of the chosen device:
                  Right-click the DS8000 storage resource, and select Launch Device Manager.
                  On the right-hand side under the Tasks column, drag Managed Storage Units →
                  Launch Device Managers onto the storage device you want to query.
                  We received a message that TotalStorage Productivity Center for Disk could not
                  automatically log on (Figure 11-27). Click OK to get the DS8000 storage manager screen
                  as shown in Figure 11-28 on page 395.




              Figure 11-27 DS8000 storage manager launch warning




Figure 11-28 shows the DS8000 device manager launched by Productivity Center common
base.




Figure 11-28 DS8000 device manager launched by Productivity Center common base




11.8 Working with SAN Volume Controller
              This section covers the Productivity Center common base functions that are available when
              managing SAN Volume Controller subsystems.

              There are two ways to access Productivity Center functions for a given device, and these can
              be seen in Figure 11-29 on page 397:
                  Tasks access: You will see in the right-hand task panel that there are a number of
                  available tasks under the Manage Storage Units section. These management functions
                  can be invoked by dragging them onto the chosen device. However, not all functions are
                  appropriate to all supported devices.
                  Right-click access: To access all functions available for a specific device, right-click it to
                  see a drop-down menu of options for that device. Figure 11-29 on page 397 shows the
                  drop-down menu for a SAN Volume Controller.

                Note: Overall, the SAN Volume Controller functionality offered in version 2.3 of
                Productivity Center common base is fairly limited compared to that of the native SAN
                Volume Controller Web-based GUI. You can add existing unmanaged LUNs to existing
                MDisk groups, but there are no tools to remove MDisks from a group or to create or
                delete MDisk groups. The functions available for VDisks are similarly limited:
                Productivity Center common base can create new VDisks in a given MDisk group, but
                there is little other control over the placement of these volumes. It is not possible to
                remove VDisks or reassign them to other hosts using Productivity Center common base.


11.8.1 Working with SAN Volume Controller MDisks
              To view the properties of SAN Volume Controller managed disks (MDisk) as shown in
              Figure 11-30 on page 398, perform one of the following actions:
                  Right-click the SVC storage resource, and select Managed Disks (Figure 11-29 on
                  page 397).
                  On the right-hand side under the Tasks column, drag Managed Storage Units →
                  Managed Disks onto the storage device you want to query.

                Tip: Before SAN Volume Controller managed disk properties (MDisks) can be displayed,
                as with other storage devices managed by Productivity Center common base, an initial
                inventory must be completed.

                If you try to use the Managed Disk function on a SAN Volume Controller that has not been
                inventoried, you will receive a notification that this needs to be done. Refer to 11.4,
                “Performing volume inventory” on page 378 for details on performing this operation.




Here is the panel for selecting managed disks (Figure 11-29).




Figure 11-29 Select managed disk




Next, you should see the panel shown in Figure 11-30.




              Figure 11-30 The MDisk properties panel for SAN Volume Controller

              Figure 11-30 shows candidate or unmanaged MDisks, which are available for inclusion into
              an existing MDisk group.

              To add one or more unmanaged disks to an existing MDisk group:
                  Select the MDisk group from the pull-down menu.
                  Select one MDisk from the list of candidate MDisks, or use the <Ctrl> key to select
                  multiple disks.
                  Click the OK button at the bottom of the screen and the selected MDisk(s) will be added to
                  the MDisk group (Figure 11-31 on page 399).
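
The steps above move MDisks from the unmanaged candidate state into a group. A small sketch of that transition follows; the lists and function are illustrative assumptions, since SVC itself tracks this state internally:

```python
# Sketch of the candidate-to-managed MDisk transition described above.

def add_to_mdisk_group(candidates, groups, group_name, mdisk_names):
    """Move unmanaged candidate MDisks into an existing MDisk group,
    as when selecting candidates in the panel and clicking OK."""
    if group_name not in groups:
        raise KeyError(f"MDisk group {group_name!r} does not exist")
    for name in mdisk_names:
        if name not in candidates:
            raise ValueError(f"{name!r} is not an unmanaged candidate")
    for name in mdisk_names:
        candidates.remove(name)          # no longer a candidate
        groups[group_name].append(name)  # now managed in the group
    return groups[group_name]

# Example: add candidate mdisk4 to an existing group.
candidates = ["mdisk4", "mdisk5"]
groups = {"mdiskgrp0": ["mdisk0", "mdisk1"]}
members = add_to_mdisk_group(candidates, groups, "mdiskgrp0", ["mdisk4"])
```

Validating the group name first reflects the Version 2.3 limitation that the target MDisk group must already exist.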




Figure 11-31 Add MDisk to a managed disk group


11.8.2 Creating new MDisks on supported storage devices

           Attention: The Create button, as seen in Figure 11-30 on page 398, is not for creating
           new MDisk groups. It is for creating new MDisks on storage devices serving the SAN
           Volume controller. It is not possible to create new MDisk groups using Version 2.3 of
           Productivity Center common base.

              Select the MDisk group from the pull-down menu (Figure 11-30 on page 398).
              Click the Create button.
              A new panel opens to create the storage volume (Figure 11-32 on page 400).
              Select a device accessible to the SVC (devices not marked by an asterisk).
              Devices marked with an asterisk are not acting as storage for the selected SAN Volume
              Controller. Figure 11-32 on page 400 shows an ESS with an asterisk next to it. This is
              because of the setup of our test environment. Make sure the device you select does not
              have an asterisk next to it.
              Specify the number of MDisks in the Volume quantity and the size in the Requested
              volume size.
              Select the Defined SVC ports that should be assigned to these new MDisks.


Note: If TotalStorage Productivity Center for Fabric is installed and configured, extra
                panels will appear to create appropriate zoning for this operation. See Chapter 14, “Using
                TotalStorage Productivity Center for Fabric” on page 703 for details.

                  Click OK to start a process that will create a new volume on the selected storage device
                  and then add it to the SAN Volume Controller's MDisk group.




              Figure 11-32 Create volumes to be added as MDisks

              Productivity Center common base will now request the specified storage amount from the
              specified back-end storage device (see Figure 11-33).




              Figure 11-33 Volume creation results


The next step is to add the MDisks to an MDisk group (see Figure 11-34).




Figure 11-34 Assign MDisk to an MDisk group




Figure 11-35 shows the result of adding mdisk4 to the selected MDisk group.




              Figure 11-35 Result of adding the mdisk4 to the MDisk group


11.8.3 Create and view SAN Volume Controller VDisks
              To create or view the properties of SAN Volume Controller virtual disks (VDisk) as shown in
              Figure 11-36 on page 403, perform one of the following actions:
                  Right-click the SVC storage resource, and select Virtual Disks.
                  On the right-hand side under the Tasks column, drag Managed Storage Units → Virtual
                  Disks onto the storage device you want to query.

              In version 2.3 of Productivity Center common base, it is not possible to delete VDisks. It is
              also not possible to assign or reassign VDisks to a host after the creation process. Keep this
              in mind when working with storage using Productivity Center common base on a SAN
              Volume Controller. These tasks can still be performed using the native SAN Volume
              Controller Web based GUI.




Tip: Before SAN Volume Controller virtual disk properties (VDisks) can be displayed, as
 with other storage devices managed by Productivity Center common base, an initial
 inventory must be completed.

 If you try to use the Virtual Disk function on a SAN Volume Controller that has not been
 inventoried, you will receive a notification that this needs to be done. To perform an
 inventory collection, see 11.4, “Performing volume inventory” on page 378.




Figure 11-36 Launch Virtual Disks




Viewing VDisks
               Figure 11-37 shows the VDisk inventory and volume attributes for the selected SAN Volume
               Controller.




              Figure 11-37 The VDisk properties panel




Creating a VDisk
To create a new VDisk, use the Create button as shown in Figure 11-37 on page 404. You
need to provide a suitable VDisk name and select the MDisk group from which you want to
create the VDisk. Specify the number of VDisks to be created and the size in megabytes or
gigabytes that each VDisk should be. Figure 11-38 shows some example input in these fields.
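
The creation inputs (base name, MDisk group, quantity, and a size given in megabytes or gigabytes) can be sketched as below. The unit handling and naming scheme are assumptions for the example, not the product's actual behavior:

```python
# Hedged sketch of the VDisk creation inputs: name, MDisk group, quantity,
# and a size given in megabytes or gigabytes.

def vdisk_size_mb(size, unit):
    """Normalize the requested VDisk size to megabytes (1 GB = 1024 MB here)."""
    unit = unit.upper()
    if unit == "MB":
        return size
    if unit == "GB":
        return size * 1024
    raise ValueError(f"Unsupported unit: {unit}")

def plan_vdisks(base_name, mdisk_group, quantity, size, unit):
    """Produce one creation entry per requested VDisk, numbered from 1."""
    mb = vdisk_size_mb(size, unit)
    return [{"name": f"{base_name}{i}", "mdisk_group": mdisk_group, "size_mb": mb}
            for i in range(1, quantity + 1)]

# Example: two 10 GB VDisks carved from a hypothetical MDisk group.
plan = plan_vdisks("vdisk_itso", "mdiskgrp0", 2, 10, "GB")
```

Normalizing the size up front avoids mixing MB and GB values when several VDisks are created in one operation.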




Figure 11-38 SAN Volume Controller VDisk creation

The Host ports section of the VDisk properties panel allows you to use TotalStorage
Productivity Center for Fabric functionality to perform zoning actions to provide VDisk access
to specific host WWPNs. If TotalStorage Productivity Center for Fabric (formerly TSANM) is
not installed, you will receive a warning.

If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using
TotalStorage Productivity Center for Fabric” on page 703 for details on how to configure and
use it.




Figure 11-39 shows that the creation of the VDisk was successful.




              Figure 11-39 Volume creation results



11.9 Working with DS4000 family or FAStT storage
              This section covers the Productivity Center common base functions that are available when
              managing DS4000 and FAStT type subsystems.

              There are two ways to access Productivity Center functions for a given device, and these can
              be seen in Figure 11-40 on page 407:
                  Tasks access: You will see in the right-hand task panel that there are a number of
                   available tasks under the Manage Storage Units section. These management functions
                  can be invoked by dragging them onto the chosen device. However, not all functions are
                  appropriate to all supported devices.
                  Right-click access: To access all functions available for the selected device, right-click it
                  to see a drop-down menu of options for it.

               Figure 11-40 on page 407 shows the functions of TotalStorage Productivity Center for Disk
               and TotalStorage Productivity Center for Replication. Although this chapter covers only the
               Productivity Center common base functions, you would always have installed TotalStorage
               Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.




11.9.1 Working with DS4000 or FAStT volumes
          To view the status of the volumes available within a selected DS4000 or FAStT device,
          perform one of the following actions:
             Right-click the DS4000 or FAStT storage resource, and select Volumes (Figure 11-40).
             On the right-hand side under the Tasks column, drag Managed Storage Units →
             Volumes onto the storage device you want to query.

          In either case, in the bottom left corner, the status will change from Ready to Starting Task
          and it will remain this way until the volume inventory is completed (see Figure 11-41 on
          page 408).

           Note: Before DS4000 or FAStT volume properties can be displayed, as with other storage
           devices managed by Productivity Center common base, an initial inventory must be
           completed.

           Refer to 11.4, “Performing volume inventory” on page 378 for details.




          Figure 11-40 Working with DS4000 and FAStT volumes




Figure 11-41 DS4000 and FAStT volumes panel

              Figure 11-41 shows the volume inventory for the selected device. From this panel you can
              Create and Delete volumes or assign and unassign volumes to hosts.




11.9.2 Creating DS4000 or FAStT volumes
          To create new storage volumes on a DS4000 or FAStT, click the Create button on the right
          side of the Volumes panel (Figure 11-41 on page 408). You will be presented with the Create
          volume panel as in Figure 11-42.




          Figure 11-42 DS4000 or FAStT create volumes

          Select the desired Storage Type and array from Available arrays using the drop-down
          menus. Then enter the Volume quantity and Requested volume size of the new volumes.
          Finally, select the host ports you wish to assign to the new volumes from the Defined host
          ports scroll box, holding the <Ctrl> key to select multiple ports.
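The selections on the Create volume panel can be captured as a simple request record. The following is an illustrative sketch only; the function and field names are assumptions for illustration, not the actual Productivity Center API:

```python
# Illustrative sketch of a DS4000/FAStT volume-creation request,
# mirroring the fields on the Create volume panel. The function and
# field names are hypothetical -- not the actual Productivity Center API.
def build_create_request(storage_type, array, quantity, size_gb, host_ports):
    if quantity < 1:
        raise ValueError("Volume quantity must be at least 1")
    if size_gb <= 0:
        raise ValueError("Requested volume size must be positive")
    return {
        "storage_type": storage_type,
        "array": array,                  # chosen from Available arrays
        "volume_quantity": quantity,
        "requested_size_gb": size_gb,
        "host_ports": list(host_ports),  # WWPNs selected with <Ctrl>
    }

# Two 10 GB volumes on a hypothetical array, assigned to one host port
request = build_create_request("Fibre", "Array1", 2, 10,
                               ["10:00:00:00:c9:2d:3e:4f"])
```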

          The Defined host ports section of the panel allows you to use TotalStorage Productivity
          Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide volume
          access to specific host WWPNs. If TSANM is not installed, you will receive the warning
          shown in Figure 11-43.

          If TotalStorage Productivity Center for Fabric is installed refer to Chapter 14, “Using
          TotalStorage Productivity Center for Fabric” on page 703 for details on how to configure and
          use it.




          Figure 11-43 Tivoli SAN Manager warning

          If TotalStorage Productivity Center for Fabric is not installed, click OK to continue.

          You should then see the panels shown below (Figure 11-44 through Figure 11-48 on
          page 412).



Figure 11-44 Volume creation results (1)




              Figure 11-45 Volume creation results (2)




Figure 11-46 Volume creation results (3)




Figure 11-47 Volume creation results (4)




Figure 11-48 Volume creation results (5)




11.9.3 Assigning hosts to DS4000 or FAStT volumes
          Use this feature to assign hosts to an existing DS4000 or FAStT volume.

          To assign a DS4000 or FAStT volume to a host port, first select a volume by clicking it from
          the volumes panel (Figure 11-41 on page 408). Now click the Assign host button from the
          right side of the Volumes panel. You will be presented with a panel as shown in Figure 11-49.

          From the list of available host port worldwide port names (WWPNs), select either a single
          host port WWPN, or more than one by holding down the <Ctrl> key and selecting multiple
          host ports. When the desired host ports have been selected for host assignment, click OK.




          Figure 11-49 Assign host ports to DS4000 or FAStT

          The Defined host ports section of the panel allows you to use TotalStorage Productivity
          Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide volume
          access to specific host WWPNs. If TSANM is not installed, you will receive the warning
          shown in Figure 11-50.

          If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using
          TotalStorage Productivity Center for Fabric” on page 703 for details on how to configure and
          use it.




          Figure 11-50 Tivoli SAN Manager warning




If TotalStorage Productivity Center for Fabric is not installed, click OK to continue
              (Figure 11-51).




              Figure 11-51 DS4000 volumes successfully assigned to a host


11.9.4 Unassigning hosts from DS4000 or FAStT volumes
              To unassign a DS4000 or FAStT volume from a host port, first select a volume by clicking it
              in the volumes panel (Figure 11-41 on page 408). Then click the Unassign host button on
              the right side of the Volumes panel. You will be presented with a panel as in
              Figure 11-52. From the list of available host port worldwide port names (WWPNs), select
              either a single WWPN, or multiple WWPNs by holding down the <Ctrl> key. When the
              desired host ports have been selected for unassignment, click OK.

                Note: If the Unassign host button is grayed out when you have selected a volume, this
                means that there are no current host assignments for that volume. If you believe this is
                incorrect, it could be that the Productivity Center common base inventory is out of step with
                this device’s configuration. This situation can arise when an administrator makes changes
                to the device outside of the Productivity Center common base interface.

                To correct this problem, perform an inventory for the DS4000 or FAStT and repeat the
                operation. Refer to 11.4, “Performing volume inventory” on page 378.




              Figure 11-52 Unassign host ports from DS4000 or FAStT


TotalStorage Productivity Center for Fabric is not called to perform zoning cleanup in version
           2.1. This functionality is planned for a future release.




           Figure 11-53 Volume unassignment results


11.9.5 Volume properties
           Figure 11-54 shows the properties panel for a selected DS4000 or FAStT volume.




           Figure 11-54 DS4000 or FAStT volume properties




11.10 Event Action Plan Builder
              IBM Director includes sophisticated event-handling support. Event Action Plans can be
              set up that specify what steps, if any, should be taken when particular events occur in the
              environment.

              Understanding Event Action Plans
              An Event Action Plan associates one or more event filters with one or more actions.

              For example, an Event Action Plan can be created to send a page to the network
              administrator's pager if an event with a severity level of critical or fatal is received by the IBM
              Director Server. You can include as many event filter and action pairs as needed in a single
              Event Action Plan.

              An Event Action Plan is activated only when you apply it to a managed system or group. If an
              event targets a system to which the plan is applied and that event meets the filtering criteria
              defined in the plan, the associated actions are performed. Multiple event filters can be
              associated with the same action, and a single event filter can be associated with multiple
              actions.
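The relationships described above can be sketched as a small model. This is an illustration of the filter-and-action pairing logic only, not IBM Director's actual implementation; all names here are hypothetical:

```python
# Illustrative model of Event Action Plan evaluation -- not IBM
# Director's actual implementation. A plan pairs event filters with
# actions and only fires for systems or groups it has been applied to.
class EventActionPlan:
    def __init__(self, name):
        self.name = name
        self.pairs = []          # list of (filter_fn, actions) pairs
        self.applied_to = set()  # managed systems or groups

    def add_pair(self, filter_fn, actions):
        self.pairs.append((filter_fn, actions))

    def apply_to(self, system):
        self.applied_to.add(system)

    def handle(self, event):
        """Run every action whose filter matches, but only if the
        plan is applied to the system that raised the event."""
        if event["system"] not in self.applied_to:
            return []
        fired = []
        for filter_fn, actions in self.pairs:
            if filter_fn(event):
                fired.extend(actions)
        return fired

# A filter matching critical or fatal events, paired with a page action
plan = EventActionPlan("Page on critical")
plan.add_pair(lambda e: e["severity"] in ("critical", "fatal"),
              ["page network administrator"])
plan.apply_to("server01")
```

With this sketch, an event from server01 with severity critical fires the page action, while the same event from a system the plan is not applied to fires nothing.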

              The action templates you can use to define actions are listed in the Actions pane of the
              Event Action Plan Builder window (see Figure 11-55).




              Figure 11-55 Action templates




Creating an Event Action Plan
Event Action Plans are created in the Event Action Plan Builder window. To open this window
from the Director Console, click the Event Action Plan Builder icon on the toolbar. The
Event Action Plan Builder window is displayed (see Figure 11-56).




Figure 11-56 Event Action Plan Builder

Follow these steps to create an Event Action Plan.
1. To begin, do one of the following actions:
   – Right-click Event Action Plans in the Event Action Plans pane to access the context
     menu, and then select New.
   – Select File → New → Event Action Plan from the menu bar.
   – Double-click the Event Action Plan folder in the Event Action Plans pane (see
     Figure 11-57).




Figure 11-57 Create Event Action Plan




2. Enter the name you want to assign to the plan and click OK to save the new plan. The new
                 plan entry with the name you assigned is displayed in the Event Action Plans pane. The
                 plan is also added to the Event Action Plans task as a child entry in the Director Console
                 (see Figure 11-58). Now that you have defined an event action plan, you can assign one or
                 more filters and actions to the plan.




              Figure 11-58 New Event Action Plan


                   Notes:
                       You can create a plan without having defined any filters or actions.
                       The order in which you build a filter, action, and Event Action Plan does not matter.

              3. Assign at least one filter to the Event Action Plan using one of the following methods:
                  – Drag the event filter from the Event Filters pane to the Event Action Plan in the Event
                    Action Plans pane.
                  – Highlight the Event Action Plan, then right-click the event filter to display the context
                    menu and select Add to Event Action Plan.
                  – Highlight the event filter, then right-click the Event Action Plan to display the context
                    menu and select Add Event Filter (see Figure 11-59 on page 419).




Figure 11-59 Add events to the action plan

      The filter is now displayed as a child entry under the plan (see Figure 11-60).




Figure 11-60 Events added to action plan




4. Assign at least one action to at least one filter in the Event Action Plan using one of the
                 following methods:
                  – Drag the action from the Actions pane to the target event filter under the desired Event
                    Action Plan in the Event Action Plans pane.
                  – Highlight the target filter, then right-click the desired action to display the context menu
                    and select Add to Event Action Plan.
                  – Highlight the desired action, then right-click the target filter to display the context menu
                    and select Add Action.
                  The action is now displayed as a child entry under the filter (see Figure 11-61).




              Figure 11-61 Action as child of Display Events Action Plan

              5. Repeat the previous two steps for as many filter and action pairings as you want to add to
                 the plan. You can assign multiple actions to a single filter and multiple filters to a single
                 plan.

                   Note: The plan you have just created is not active because it has not been applied to a
                   managed system or a group. In the next section we explain how to apply an Event
                   Action Plan to a managed system or group.




11.10.1 Applying an Event Action Plan to a managed system or group
           An Event Action Plan is activated only when it is applied to a managed system or group.
            To activate a plan, do one of the following actions:
            – Drag the plan from the Tasks pane of the Director Console to a managed system in the
              Group Contents pane or to a group in the Groups pane.
            – Drag the system or group to the plan.
            – Select the plan, right-click the system or group, and select Add Event Action Plan (see
              Figure 11-62).




           Figure 11-62 Notification of Event Action Plan added to group/system(s)

           Repeat this step for all associations you want to make. You can activate the same Event
           Action Plan for multiple systems (see Figure 11-63).




           Figure 11-63 Director with Event Action Plan - Display Events

           Once applied, the plan is activated and displayed as a child entry of the managed system or
           group to which it is applied when the Associations - Event Action Plans item is checked.




Message Browser
              When an event occurs, the Message Browser (see Figure 11-64) pops up on the server
              console.




              Figure 11-64 Message Browser

              If the message has not yet been viewed, the Status for that message will be blank. When
              it has been viewed, a checked envelope icon appears under the Status column next to the
              message.

              To see greater detail on a particular message, select the message in the left pane and click
              the Event Details button (see Figure 11-65).




              Figure 11-65 Event Details window




11.10.2 Exporting and importing Event Action Plans
           With the Event Action Plan Builder, you can import and export action plans to files. This
           enables you to move action plans quickly from one IBM Director Server to another or to
           import action plans that others have provided.

           Export
           Event Action Plans can be exported to three types of files:
               Archive: Backs up the selected action plan to a file that can be imported into any IBM
               Director Server.
               HTML: Creates a detailed listing of the selected action plan, including its filters and
               actions, in HTML file format.
               XML: Creates a detailed listing of the selected action plan, including its filters and
               actions, in XML file format.

           To export an Event Action Plan, do the following steps:
           1. Open the Event Action Plan Builder.
           2. Select an Event Action Plan from those available under the Event Action Plan folder.
           3. Select File → Export, then click the type of file you want to export to (see Figure 11-66).
              If this Event Action Plan will be imported by an IBM Director Server, then select Archive.




           Figure 11-66 Archiving an Event Action Plan




4. Name the archive and choose a location to save it in the Select Archive File for Export
                  window, as shown in Figure 11-67.




              Figure 11-67 Select destination and file name


                Tip: When you export an action plan, regardless of the type, the file is created on a local
                drive on the IBM Director Server. If an IBM Director Console is used to access the IBM
                Director Server, then the file could be saved to either the Server or the Console by
                selecting Server or Local from the Destinations pull-down. It cannot be saved to a network
                drive. Use the File Transfer task if you want to copy the file elsewhere.




Import
Event Action Plans can be imported from a file, which must be an Archive export of an action
plan from another IBM Director Server. Follow these steps to import an Event Action Plan:
1. Transfer the archive file to be imported to a drive on the IBM Director Server.
2. Open the Event Action Plan Builder from the main Console window.
3. Click File → Import → Archive (see Figure 11-68).




Figure 11-68 Importing an Event Action Plan

4. From the Select File for Import window (see Figure 11-69), select the archive file and
   location. The file must be located on the IBM Director Server. If using the Console, you
   must transfer the file to the IBM Director Server before it can be imported.




Figure 11-69 Select file for import


5. Click OK to begin the import process.
                  The Import Action Plan window opens, displaying the action plan to import (see
                  Figure 11-70). If the action plan had been assigned previously to systems or groups, you
                  will be given the option to preserve associations during the import. Select Import to
                  complete the import process.




              Figure 11-70 Verifying import of Event Action Plan




12


   Chapter 12.   Using TotalStorage Productivity
                 Center Performance Manager
                 This chapter provides a step-by-step guide to help you configure and use the Performance
                 Manager functions provided by the TotalStorage Productivity Center for Disk.




© Copyright IBM Corp. 2005. All rights reserved.                                                       427
12.1 Exploiting Performance Manager
              You can use the Performance Manager component of TotalStorage Productivity Center for
              Disk to manage and monitor the performance of the storage devices that TotalStorage
              Productivity Center for Disk supports.

              Performance Manager provides the following functions:
                  Collecting data from devices
                  Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server
                  (ESS) and IBM TotalStorage SAN Volume Controller in the first release.
                  Configuring performance thresholds
                  You can use the Performance Manager to set performance thresholds for each device
                  type. Setting thresholds for certain criteria allows Performance Manager to notify you
                  when a certain threshold has been crossed, thus enabling you to take action before a
                  critical event occurs.
                  Viewing performance data
                  You can view performance data from the Performance Manager database using the gauge
                  application programming interfaces (APIs). These gauges present performance data in
                  graphical and tabular forms.
                   Using Volume Performance Advisor (VPA)
                   The Volume Performance Advisor is an automated tool that helps you select the best
                   possible placement of a new LUN from a performance perspective. This function is
                   integrated with Device Manager so that, when the VPA has recommended locations for
                   requested LUNs, the LUNs can be allocated and assigned to the appropriate host without
                   going back to Device Manager.
                  Managing workload profiles
                  You can use Performance Manager to select a predefined workload profile or to create a
                  new workload profile that is based on historical performance data or on an existing
                  workload profile. Performance Manager uses these profiles to create a performance
                  recommendation for volume allocation on an IBM storage server.

              The installation of the Performance Manager component onto an existing TotalStorage
              Productivity Center for Disk server provides a new ‘Manage Performance’ task tree
              (Figure 12-1) on the right-hand side of the TotalStorage Productivity Center for Disk host. This
              task tree includes the various elements shown.




              Figure 12-1 New Performance Manager tasks


12.1.1 Performance Manager GUI
            The Performance Manager graphical user interface can be launched from the IBM Director
            console interface. After logging on to IBM Director, you will see a screen as in Figure 12-2.
            In the rightmost Tasks pane, you will see the Manage Performance launch menu. It is
            highlighted and expanded in the figure shown.




           Figure 12-2 IBM Director Console with Performance Manager


12.1.2 Performance Manager data collection
           To collect performance data for the Enterprise Storage Server (ESS), Performance Manager
           invokes the ESS Specialist server, setting a particular performance data collection frequency
           and duration of collection. Specialist collects the performance statistics from an ESS,
           establishes a connection, and sends the collected performance data to Performance
           Manager. Performance Manager then processes the performance data and saves it in
           Performance Manager database tables.

           From this section you can create data collection tasks for the supported, discovered IBM
           storage devices. There are two ways to use the Data Collection task to begin gathering device
           performance data.
           1. Drag and drop the data collection task option from the right-hand side of the Multiple
              Device Manager application, onto the Storage Device for which you want to create the new
              task.


                              Chapter 12. Using TotalStorage Productivity Center Performance Manager   429
2. Right-click a storage device in the center column, and select the Performance Data
                  Collection Panel menu option as shown in Figure 12-3.




              Figure 12-3 ESS tasks panel

              Either operation results in a new window named Create Performance Data Collection Task
              (Figure 12-4).

              In this window you will specify:
                  A task name
                  A brief description of the task
                  The sample frequency in minutes
                  The duration of data collection task (in hours)
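As a rough sketch, the task definition above amounts to a sampling schedule: the sample frequency and duration together determine how many samples are gathered. The field names below are assumptions for illustration, not the Performance Manager API:

```python
# Illustrative sketch of a performance data collection task definition.
# Field names are hypothetical -- not the actual Performance Manager API.
def collection_task(name, description, frequency_min, duration_hr):
    # Number of samples gathered over the task's lifetime
    samples = (duration_hr * 60) // frequency_min
    return {
        "name": name,
        "description": description,
        "frequency_minutes": frequency_min,
        "duration_hours": duration_hr,
        "expected_samples": samples,
    }

# The example task from this section: 5-minute samples for 1 hour
task = collection_task("Cottle_ESS", "ESS 2105.16603", 5, 1)
```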




              Figure 12-4 Create Performance Data Collection Task for ESS


In our example, we set up a data collection task on an ESS with Device ID
2105.16603. We created a task named Cottle_ESS with a sample frequency of 5
minutes and a duration of 1 hour.

It is possible to add more ESSs to the same data collection task, by clicking the Add button
on the right-hand side. You can click individual devices, or select multiples by making use of
the Ctrl key. See Figure 12-5 for an example of this panel. In our example, we created a task
for the ESS with device ID 2105.22513.




Figure 12-5 Adding multiple devices to a single task




Once we have established the scope of our data collection task and have clicked the OK
              button, we see our new data collection task available in the right-hand task column (see
              Figure 12-6). We have created task Cottle_ESS in the example.

                Tip: When providing a description for a new data collection task, you may elect to provide
                information about the duration and frequency of the task.




              Figure 12-6 A new data collection task




To schedule the task, right-click it (see Figure 12-7).




Figure 12-7 Scheduling new data collection task

You will see another window as shown in Figure 12-8.




Figure 12-8 Scheduling task

You have the option to use the job scheduling facility of TotalStorage Productivity Center for
Disk, or to execute the task immediately.




If you select Execute Now, you will see a panel similar to the one in Figure 12-9, providing
              you with some information about task name and task status, including the time the task was
              initialized.




              Figure 12-9 Task progress panel

              If you would rather schedule the task to occur at a future time, or specify additional
              parameters for the job schedule, you can walk through the panel in Figure 12-10. You may
              provide a description for the scheduled job. In our example, we created a job named
              24March Cottle ESS.




              Figure 12-10 New scheduled job panel


12.1.3 Using IBM Director Scheduler function
           You may specify additional scheduled job parameters by using the Advanced button.
           You will see the panel in Figure 12-11. You can also launch this panel from IBM Director
           Console → Tasks → Scheduler → File → New Job. You can also set up the repeat
           frequency of the task.




           Figure 12-11 New scheduled job, advanced tab

           Once you are finished customizing the job options, you may save it using the File → Save as
           menu, or by clicking the diskette icon in the top left corner of the advanced panel.




When you save with advanced job options, you may provide a descriptive name for the job as
              shown in Figure 12-12.




              Figure 12-12 Save job panel with advanced options

              You should receive a confirmation that your job has been saved as shown in Figure 12-13.




              Figure 12-13 Scheduled job is saved




12.1.4 Reviewing data collection task status
           You can review the task status using Task Status under the rightmost column Tasks. See
           Figure 12-14.




           Figure 12-14 Task Status




Double-clicking Task Status launches the panel shown in Figure 12-15.




              Figure 12-15 Task Status Panel

              To review the status of a task, click the task shown under the Task name column.
              For example, we selected the task FCA18P, which was aborted, as shown in Figure 12-16 on
              page 439. The Device status box then shows the details, with Device ID, Device status, and
              Error Message ID. You can click an entry in the Device status box to display the full error
              text in the Error message box.




Figure 12-16 Task status details


12.1.5 Managing Performance Manager Database
           The collected performance data is stored in a back-end DB2 database. This database needs
           to be maintained so that only relevant data is kept. You may decide on a frequency for
           purging old data based on your organization’s requirements.




The performance database panel can be launched by clicking Performance Database as
               shown in Figure 12-17. It displays the Performance Database Properties panel as shown in
               Figure 12-18 on page 441.




              Figure 12-17 Launch Performance Manager database




You can use the performance database panel to specify properties for a performance
database purge task. The sizing function on this panel shows used space and free space in
the database. You can choose to purge performance data based on age of the data, the type
of the data, and the storage devices associated with the data (Figure 12-18).




Figure 12-18 Properties of Performance database

The Performance database properties panel shows the following data:
   Database name
   The name of the database
   Database location
   The file system on which the database resides.
   Total file system capacity
   The total capacity available to the file system, in gigabytes.
   Space currently used on file system
   Space is shown in gigabytes and also by percentage.
   Performance manager database full
   The amount of space used by Performance Manager. The percentage shown is the
   percentage of available space (total space - currently used space) used by the
   Performance Manager database.




The following formula is used to derive the percentage of disk space full in the
                  Performance Manager database:
                  a = the total capacity of the file system
                  b = the total allocated space for Performance Manager database on the file system
                  c = the portion of the allocated space that is used by the Performance Manager database
                   Any fractional percentage is rounded up to the next largest integer. For example,
                   5.1% is rounded up to and displayed as 6%.
                   Space status advisor
                   The Space status advisor monitors the amount of space used by the Performance
                   Manager database and advises you whether you should purge data. The advisor
                   levels are:
                   Low: You do not need to purge data now.
                   High: You should purge data soon.
                   Critical: You need to purge data now.
                   The disk space thresholds for these status categories are: Low if utilization is below
                   0.8, High if utilization is at least 0.8 but below 0.9, and Critical otherwise. That is, the
                   delimiters between Low, High, and Critical are 80% and 90% full.
                   Purge database options
                   Groups the database purge information.
                   Name
                   Type a name for the performance database purge task. The name can be from 1 to
                   250 characters.
                   Description (optional)
                   Type a description for the performance database purge task. The description can be
                   from 1 to 250 characters.
                  Device type
                  Select one or more storage device types for the performance database purge. Options are
                  SVC, ESS, or All. (Default is All.)
                  Purge performance data older than
                  Select the maximum age for data to be retained when the purge task is run. You can
                  specify this value in days (1-365) or years (1-10). For example, if you select the Days
                  button and a value of 10, the database purge task will purge all data older than 10 days
                  when it is run. Therefore, if it has been more than 10 days since the task was run, all
                  performance data would be purged. Defaults are 365 days or 10 years.
                  Purge data containing threshold exception information
                  Deselecting this option will preserve performance data that contains information about
                  threshold exceptions. This information is required to display exception gauges. This option
                  is selected by default.
                  Save as task button
                  When you click Save as task, the information you specified is saved and the panel closes.
                  The newly created task is saved to the IBM Director Task pane under the Performance
                  Manager Database. Once it is saved, the task can be scheduled using the IBM Director
                  scheduler function.
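The Space status advisor thresholds and the round-up display rule described above can be sketched as follows. This is an illustrative reconstruction of the stated rules, not the product's code:

```python
import math

# Sketch of the space status advisor logic described above: utilization
# under 80% is Low, 80-90% is High, 90% or more is Critical. Displayed
# percentages round up to the next largest integer (5.1% shows as 6%).
def display_percent(utilization):
    return math.ceil(utilization * 100)

def space_status(utilization):
    if utilization < 0.8:
        return "Low"       # no need to purge data now
    if utilization < 0.9:
        return "High"      # purge data soon
    return "Critical"      # purge data now
```

For example, a database at 85% utilization is reported as High, advising a purge soon, while one at 91% is Critical.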




12.1.6 Performance Manager gauges
          Once data collection is complete, you may use the Gauges task to retrieve information about
          a variety of storage device metrics.

          Gauges are used to drill down to the level of detail necessary to isolate performance issues
          on the storage device. To view information collected by the Performance Manager, you must
          either create a gauge or write a custom script that accesses the DB2 tables and fields directly.

          Creating a gauge
          Open the IBM Director and do one of the following tasks:
             Right-click the storage device in the center pane and select Gauges (see Figure 12-19).




          Figure 12-19 Right-click gauge opening

             You can click Gauges on the panel shown and it will produce the Job Status window as
             shown in Figure 12-21 on page 444. It is also possible to launch gauge creation by
             expanding Multiple Device Manager - Manage Performance in the rightmost column. Drag
             the Gauges item to the desired storage device and drop it to open the gauges for that
             device (see Figure 12-20 on page 444).




                             Chapter 12. Using TotalStorage Productivity Center Performance Manager   443
Figure 12-20 Drag-n-drop gauge opening

              This will produce the Job status window (see Figure 12-21) while the Performance gauges
              window opens. You will see the Job status window while other selected windows are opening.




              Figure 12-21 Opening Performance gauges job status

              The Performance gauges window will be empty until a gauge is created for use. We have
              created three gauges (see Figure 12-22).




              Figure 12-22 Performance gauges

              Clicking the Create button to the left brings up the Job status window while the Create
              performance gauge window opens.




The Create performance gauge window changes values depending on whether the cluster,
array, or volume items are selected in the left pane. Clicking the cluster item in the left pane
produces a window as seen in Figure 12-23.




Figure 12-23 Create performance gauge - Performance

Under the Type pull-down menu, select Performance or Exception.

Performance
Cluster Performance gauges provide details on the average cache holding time in seconds as
well as the percent of I/O requests that were delayed due to NVS memory shortages. Two
Cluster Performance gauges are required per ESS to view the available historical data for
each cluster. Additional gauges can be created to view live performance data.
   Device: Select the storage device and time period from which to build the performance
   gauge. The time period can be changed for this device within the gauge window, thus
   allowing an overall or detailed view of the data.
   Name: Enter a name that is both descriptive of the type of gauge as well as the detail
   provided by the gauge. The name must not contain white space, special characters, or
   exceed 100 characters in length. Also, the name must be unique on the TotalStorage
   Productivity Center for Disk Performance Manager Server.
   If “test” were used as a gauge name, then it could not be used for another gauge, even
   if another storage device were selected, as it would not be unique in the database.
   Example names: 28019P_C1H would represent the ESS serial number (28019), the
   performance gauge type (P), the cluster (C1), and historical (H), while 28019E would
   represent the exception (E) gauge for the same ESS. Gauges for the clusters and arrays
   would build on that nomenclature to group the gauges by ESS on the Gauges window.
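The naming rules above (no white space or special characters, at most 100 characters, and unique across the Performance Manager server regardless of device) can be expressed as a small validator. This is an illustrative sketch, not part of the product:

```python
import re

def valid_gauge_name(name, existing_names):
    """Check a candidate gauge name against the documented rules:
    letters, digits, and underscore only; 1-100 characters; and not
    already in use on this Performance Manager server (any device)."""
    if not 1 <= len(name) <= 100:
        return False
    if not re.fullmatch(r"\w+", name):  # rejects white space and special characters
        return False
    return name not in existing_names   # must be unique across all devices
```

Under these rules a name like 28019P_C1H passes, while “test” fails once any gauge, for any device, has already taken it.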


Description: Use this space to enter a detailed description of the gauge that will appear
                  on the gauge and in the Gauges window.
                  Metric(s): Click the metric(s) that will be displayed by default when the gauge is opened for
                  viewing. Those metrics with the same value under the Units column in the Metrics table
                  can be selected together using either Shift mouse-click or Ctrl mouse-click.
                  The metrics in this field can be changed on the historical gauge after the gauge has been
                  opened for viewing. In other words, a historical gauge for each metric or group of metrics
                  is not necessary. However, these metrics cannot be changed for live gauges. A new
                  gauge is required for each metric or group of metrics desired.
                  Component: Select a single device from the Component table. This field cannot be
                  changed when the gauge is opened for viewing.
                  Data points: Selecting this radio button enables the gauge to display the most recent
                  data obtained from currently running performance collectors against the storage device.
                  One most recent data gauge is required per cluster and per metric to view live
                  collection data.
                  The Device pull-down menu displays text informing you whether a performance
                  collection task is running against this device.
                  You can select the number of data points to display: the gauge shows the last “x” data
                  points from the date of the last collection, which can be either the currently running
                  collection or the most recent completed one.
                  Date Range: Selecting this radio button presents data over a range of dates/times. Enter
                  the range of dates this gauge will use as a default for the gauge.
                  The date and time values may be adjusted within the gauge to any value before or after
                  the default values and the gauge will display any relevant data for the updated time period.
                  Display gauge: Checking this box will display the newly created gauge after clicking the
                  OK button. Otherwise, if left blank, the gauge will be saved without displaying.




Click the OK button when ready to save the performance gauge (see Figure 12-24). In this
example, we have created a gauge with the name 22513C1H and the description “average
cache holding time”. We selected 11-March-2005 as both the starting and ending date. This
corresponds to our data collection task schedule.




Figure 12-24 Ready to save performance gauge

The gauge appears after clicking the OK button with the Display gauge box checked, or when
the Display button is clicked after selecting the appropriate gauge on the Performance
gauges window (see Figure 12-26 on page 448). If you decide to save the gauge without
displaying it, you will see a panel like the one shown in Figure 12-25.




Figure 12-25 Saved performance gauges




Figure 12-26 Cluster performance gauge - upper

              The top of the gauge contains the following labels:
                  Graph Name                      The name of the gauge
                  Description                     The description of the gauge
                  Device                          The storage device selected for the gauge
                  Component level                 Cluster, array, volume
                  Component ID                    The ID # of the component (cluster, array, volume)
                  Threshold                       The thresholds that were applied to the metrics
                  Time of last data collection Date and time of the last data collection

              The Display Properties section in the center of the gauge contains the only fields that may
              be altered. The metrics may be selected either individually or in groups, as long as the units
              are the same (for example, seconds with seconds, milliseconds with milliseconds, or percent
              with percent). Click the Apply button to force a Performance Gauge section update with the
              new y-axis data.
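The same-units rule can be sketched as a simple check; the metric names and unit table below are hypothetical, for illustration only:

```python
def can_select_together(metrics, units):
    """True only if every chosen metric shares one unit -- the rule the
    Metrics table enforces when selecting with Shift or Ctrl clicks."""
    chosen_units = {units[m] for m in metrics}
    return len(chosen_units) == 1

# Hypothetical unit table for illustration; not the product's metric list.
UNITS = {
    "Total I/O Rate": "ops/sec",
    "Read I/O Rate": "ops/sec",
    "Write I/O Rate": "ops/sec",
    "Avg. Cache Holding Time": "seconds",
    "NVS Full Percentage": "percent",
}
```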

              The Start Date:, End Date:, Start Time:, and End Time: fields may be varied to either expand
              the scope of the gauge or narrow it for a more granular view of the data. Click the Apply
              button to force a Performance Gauge section update with the new x-axis data. For example,
              we applied the Total I/O Rate metric to the saved gauge; the resultant graph is shown in
              Figure 12-27 on page 449. Here, the Performance Gauge section of the gauge displays
              graphically the information over the time period selected by the gauge and the options in the
              Display Properties section.




Figure 12-27 Cluster performance gauge with applied I/O rate metric

Click the Refresh button in the Performance Gauge section to update the graph with the
original metrics and date/time criteria. The date and time of the last refresh appear to the right
of the Refresh button. The date and time displayed are updated first, followed by the contents
of the graph, which can take up to several minutes to update.

Finally, the data used to generate the graph is displayed at the bottom of the window (see
Figure 12-28 on page 450). Each of the columns in the data section can be sorted up or down
by clicking the column heading (see Figure 12-32 on page 453). The sort reads the data from
left to right, so the results may not be as expected.
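The left-to-right caveat suggests the columns are compared as text rather than as numbers, so a numeric column can appear out of order. A quick illustration of the difference:

```python
values = ["9", "10", "2", "100"]

# Lexicographic sort compares character by character, left to right,
# so "10..." sorts before "2" and "9" -- often not what is expected.
lexicographic = sorted(values)

# A numeric sort compares the values as numbers.
numeric = sorted(values, key=float)

# lexicographic: ['10', '100', '2', '9']
# numeric:       ['2', '9', '10', '100']
```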

The gauges for the array and volume components function in the same manner as the cluster
gauge created above.




Figure 12-28 Create performance gauge - lower

              Exception
              Exception gauges display data only for those active thresholds that were crossed during the
              reporting period. One Exception gauge displays threshold exceptions for the entire storage
              device based on the thresholds active at the time of collection.
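Conceptually, an exception gauge filters the collected samples down to threshold crossings that fall inside the reporting period. A rough sketch, assuming a simple sample record with a time and an alert level (field names are hypothetical, not the product's data model):

```python
from datetime import datetime

def exceptions_in_period(samples, start, end):
    """Keep only samples that crossed an active threshold (warning or
    error level) within the gauge's reporting period."""
    return [s for s in samples
            if start <= s["time"] <= end
            and s["level"] in ("warning", "error")]
```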




To create an exception gauge, select Exception from the Type pull-down menu (see
Figure 12-29).




Figure 12-29 Create performance gauge - Exception

By default, the Cluster will be highlighted in the left pane and the metrics and component
sections will not be available.
   Device: Select the storage device and time period from which to build the performance
   gauge. The time period can be changed for this device within the gauge window thus
   allowing an overall or detailed view of the data.
   Name: Enter a name that is both descriptive of the type of gauge as well as the detail
   provided by the gauge. The name must not contain white space, special characters, or
   exceed 100 characters in length. Also, the name must be unique on the TotalStorage
   Productivity Center for Disk Performance Manager Server.
   Description: Use this space to enter a detailed description of the gauge that will appear
   on the gauge and in the Gauges window.
   Date Range: Selecting this radio button presents data over a range of dates/times. Enter
   the range of dates this gauge will use as a default for the gauge.
   The date and time values may be adjusted within the gauge to any value before or after
   the default values and the gauge will display any relevant data for the updated time period.
   Display gauge: Checking this box will display the newly created gauge after clicking the
   OK button. Otherwise, if left blank, the gauge will be saved without displaying.




Click the OK button when ready to save the performance gauge. We created an exception
              gauge as shown in Figure 12-30.




              Figure 12-30 Ready to save exception gauge

              The top of the gauge contains the following labels:
                  Graph Name                      The name of the gauge
                  Description                     The description of the gauge
                  Device                          The storage device selected for the gauge
                  Threshold                       The thresholds that were applied to the metrics
                  Time of last data collection Date and time of the last data collection

              The center of the gauge contains the only fields that may be altered in the Display Properties
              section. The Start Date: and End Date: fields may be varied to either expand the scope of the
              gauge or narrow it for a more granular view of the data. Click the Apply button to force an
              Exceptions Gauge section update with the new x-axis data.

              The Exceptions Gauge section of the gauge displays graphically the information over the
              time period selected by the gauge and the options in the Display Properties section (see
              Figure 12-31 on page 453).




Figure 12-31 Exceptions gauge - upper

Click the Refresh button in the Exceptions Gauge section to update the graph with the
original date criteria. The date and time of the last refresh appear to the right of the Refresh
button. The date and time displayed are updated first, followed by the contents of the graph,
which can take up to several minutes to update. Finally, the data used to generate the graph
are displayed at the bottom of the window.

Each of the columns in the data section can be sorted up or down by clicking the column
heading (see Figure 12-32).




Figure 12-32 Data sort options




Display Gauges
              To display previously created gauges, either right-click the storage device and select Gauges
              (see Figure 12-19 on page 443) or drag and drop the Gauges item on the storage device
              (see Figure 12-20 on page 444) to open the Performance gauges window, shown here in
              Figure 12-33.




              Figure 12-33 Performance gauges window

              Select one of the gauges and then click Display.

              Gauge Properties
              The Properties button allows the following fields or choices to be modified.

              Performance
              These are the performance-related options:
                  Description
                  Metrics
                  Component
                  Data points
                  Date range — date and time ranges




You can change the data displayed in the gauge from Data points with an active data
collection to Date range (see Figure 12-34). Selecting Date range allows you to choose the
Start date and End Date using the performance data stored in the DB2 database.




Figure 12-34 Performance gauge properties




Exception
              You can change the Type property of the gauge definition from Performance to Exception. For
              a gauge type of Exception, you can only choose to view data for a Date range (see
              Figure 12-35).




              Figure 12-35 Exception gauge properties


              Delete a gauge
              To delete a previously created gauge, either right-click the storage device and select Gauges
              (see Figure 12-19 on page 443) or drag and drop the Gauges item on the storage device (see
              Figure 12-20 on page 444) to open the Performance gauges window shown in Figure 12-33
              on page 454.

              Select the gauge to remove and click Delete. A pop-up window will prompt for confirmation to
              remove the gauge (see Figure 12-36).




              Figure 12-36 Confirm gauge removal




To confirm, click Yes and the gauge will be removed. The gauge name may now be reused, if
          desired.


12.1.7 ESS thresholds
          Thresholds are used to determine watermarks for warning and error indicators for an
          assortment of storage metrics, including:
             Disk Utilization
             Cache Holding Time
             NVS Cache Full
             Total I/O Requests

          Thresholds are accessed in either of two ways:
          1. Right-click a storage device in the center panel of TotalStorage Productivity Center for
             Disk, and select the thresholds menu option (Figure 12-37).
          2. Drag and drop the thresholds task from the right tasks panel in Multiple Device
             Manager onto the desired storage device, to display or modify the thresholds for
             that device.




          Figure 12-37 Opening the thresholds panel




Upon opening the thresholds submenu, you will see the default thresholds in place for the
              ESS, as shown in Figure 12-38.




              Figure 12-38 Performance Thresholds main panel

              On the right-hand side, there are buttons for Enable, Disable, Copy Threshold Properties,
              Filters, and Properties.

              If the selected threshold is already enabled, the Enable button appears greyed out, as in
              our case. If we attempt to disable a currently enabled threshold by clicking the Disable
              button, a message is displayed, as shown in Figure 12-39.




              Figure 12-39 Disabling threshold warning panel

              You may elect to continue, and disable the selected threshold, or to cancel the operation by
              clicking Don’t disable threshold.




The Copy Threshold Properties button allows you to copy existing thresholds to other
devices of similar type (ESS, in our case). The window in Figure 12-40 is displayed.




Figure 12-40 Copying thresholds panel


 Note: As shown in Figure 12-40, the copy thresholds panel is aware that both clusters of
 our model 800 ESS are registered on our ESS CIM agent host, as indicated by the
 semicolon-delimited IP address field for the device ID “2105.22219”.

The Filters window is another available thresholds option. From this panel, you can enable,
disable, and modify existing filter values against selected thresholds as shown in
Figure 12-41.




Figure 12-41 Threshold filters panel




Finally, you can open the Properties panel for a selected threshold, shown in
              Figure 12-42. You can accept the values at their current settings, modify the warning or
              error levels, or select the alert level (the available options are none, warning only, and
              warning or error).




              Figure 12-42 Threshold properties panel


12.1.8 Data collection for SAN Volume Controller
              Performance Manager uses the integrated configuration assistant tool (ICAT) interface of the
              SAN Volume Controller (SVC) to start and stop performance statistics collection on a SAN
              Volume Controller device.

              The process for performing data collection on SAN Volume Controller is similar to that of ESS.

              You will need to set up a new Performance Data Collection Task for the SAN Volume
              Controller device. Figure 12-43 on page 461 is an example of the panel you should see when
              you drag the Data Collection task onto the SAN Volume Controller device, or right-click the
              device and left-click Data Collection.

              As with the ESS data collection task:
                  Define a task name and description
                  Select sample frequency and duration of the task and click OK

                Note: The SAN Volume Controller can perform data collection at a minimum 15 minute
                interval.

              You may use the Add button to include additional SAN Volume Controller devices in the same
              data collection task, or use the Remove button to exclude SAN Volume Controllers from an
              existing task. In our case we are performing data collection against a single SAN Volume
              Controller.




Figure 12-43 The SVC Performance Data Collection Task

          As long as at least one data collection task has been completed, you are able to proceed with
          the steps to create a gauge to view your performance data.


12.1.9 SAN Volume Controller thresholds
          To view the available Performance Manager Thresholds, you can right-click the SAN Volume
          Controller device and click Thresholds, or drag the Threshold task from the right-hand panel
          onto the SAN Volume Controller device you want to query.

          A panel like the one in Figure 12-44 appears.




          Figure 12-44 The SVC performance thresholds panel

          The SVC has the following thresholds with their default properties:
             VDisk I/O rate
             Total number of virtual disk I/Os for each I/O group. SAN Volume Controller defaults:
             – Status: Disabled
             – Warning: None
             – Error: None
             VDisk bytes per second
             Virtual disk bytes per second for each I/O group. SAN Volume Controller defaults:
             – Status: Disabled
             – Warning: None
             – Error: None




MDisk I/O rate
                  Total number of managed disk I/Os for each managed disk group. SAN Volume Controller
                  defaults:
                  – Status: Disabled
                  – Warning: None
                  – Error: None
                  MDisk bytes per second
                  Managed disk bytes per second for each managed disk group. SAN Volume Controller
                  defaults:
                  – Status: Disabled
                  – Warning: None
                  – Error: None

              You may only enable a particular threshold once minimum values for warning and error levels
              have been defined. If you attempt to select a threshold and enable it without first modifying
              these values, you will see a notification like the one in Figure 12-45.




              Figure 12-45 SAN Volume Controller threshold enable warning


                Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values
                of -1.0 are indicators that there is no recommended minimum value for the threshold and
                are therefore entirely user defined. You may elect to provide any reasonable value for these
                thresholds, keeping in mind the workload in your environment.
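The -1.0 sentinel and the enable rule above can be expressed as a small check; this is a sketch under the assumption that a threshold record carries warning and error levels (the field names are hypothetical):

```python
UNSET = -1.0  # documented sentinel: no recommended minimum, entirely user defined

def can_enable(threshold):
    """A threshold may be enabled only after both the warning and error
    levels have been set to real values (that is, are no longer -1.0)."""
    return threshold["warning"] != UNSET and threshold["error"] != UNSET
```

With both levels still at -1.0, attempting to enable the threshold would be rejected, which matches the notification panel shown above.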

              To modify the warning and error values for a given threshold, you may select the threshold,
              and click the Properties button. The panel in Figure 12-46 will be shown. You can modify the
              threshold as appropriate, and accept the new values by selecting the OK button.




              Figure 12-46 Modifying threshold warning and error values




12.1.10 Data collection for the DS6000 and DS8000
           The process for performing data collection on the DS6000/DS8000 is similar to that of the ESS.

           You will need to set up a new Performance Data Collection Task for the DS6000/DS8000
           device. Figure 12-48 is an example of the panel you should see when you drag the Data
           Collection task onto the DS6000/DS8000 device, or right-click the device and left-click
           Data Collection. Figure 12-47 shows user validation.

           As with the ESS data collection task:
              Define a task name and description
              Select sample frequency and duration of the task and click OK.




           Figure 12-47 DS6000/DS8000 user name and password




           Figure 12-48 The DS6000/DS8000 Data Collection Task

           As long as at least one data collection task has been completed, you are able to proceed with
           the steps to create a gauge to view your performance data (Figure 12-49 on page 464
           through Figure 12-51 on page 466).




Figure 12-49 DS6000/DS8000 Cluster level gauge values




Figure 12-50 DS6000/DS8000 Rank Group level gauges




Figure 12-51 DS6000/DS8000 Volume level gauges


12.1.11 DS6000 and DS8000 thresholds
              To view the available Performance Manager Thresholds, you can right-click the
               DS6000/DS8000 device and click Thresholds, or drag the Threshold task from the right-hand
              panel onto the DS6000/DS8000 device you want to query.

              A panel like the one in Figure 12-52 appears.




              Figure 12-52 The DS6000/DS8000 performance thresholds panel




You may only enable a particular threshold once minimum values for warning and error levels
         have been defined. If you attempt to select a threshold and enable it without first modifying
         these values, you will see a notification like the one in Figure 12-53.




        Figure 12-53 DS6000/DS8000 threshold enable warning


         Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values
         of -1.0 are indicators that there is no recommended minimum value for the threshold and
         are therefore entirely user defined. You may elect to provide any reasonable value for these
         thresholds, keeping in mind the workload in your environment.

         To modify the warning and error values for a given threshold, select the threshold
         and click the Properties button. The panel in Figure 12-54 will be shown. You can
         modify the threshold as appropriate, and accept the new values by clicking the OK
         button.




        Figure 12-54 Modifying DS6000/DS8000 threshold warning and error values



12.2 Exploiting gauges
         Gauges are a very useful tool for identifying performance bottlenecks. In this section
         we show the drill-down capabilities of gauges. The purpose of this section is not to cover
         performance analysis in detail for a specific product, but to highlight the capabilities of
         the tool. You can adopt a similar approach for your own performance analysis.




12.2.1 Before you begin
              Before you begin customizing gauges, ensure that enough valid data samples have been
              collected in the performance database. This is true for any performance analysis.

              The data samples you collect must cover a time period that includes the high and low points
              of the I/O workload. The samples should also cover sufficient iterations of the peak activity
              to support analysis over a period of time; this matters when you are analyzing a pattern.
              You may use the advanced scheduler function of IBM Director to configure a repetitive task.

              If you plan to perform analysis for one specific instance of activity, then you can ensure that
              the performance data collection task covers the specific time period.


12.2.2 Creating gauges: an example
              In this example, we will cover creation and customization of gauges for ESS.

              First of all, we scheduled an ESS performance data collection task to run at 3-hour intervals
              for 8 days, using the IBM Director scheduler function. For details on using the IBM Director
              scheduler, refer to 12.1.3, “Using IBM Director Scheduler function” on page 435.

              For creating the gauge, we launched the Performance gauges panel as shown in
              Figure 12-55, by right-clicking the ESS device.




              Figure 12-55 Gauges panel




Click the Create button to create a new gauge. You will see a panel similar to Figure 12-56.




Figure 12-56 Create performance gauge

We selected Cluster in the top left corner, Total I/O Rate metric in the metrics box, and
Cluster 1 in the component box. Also, we entered the following parameters:
   Name: 22219P_drilldown_analysis
   Description: Drilldown analysis for 22219 ESS




For the Date range, we selected our historical data collection sampling period and clicked
              Display gauge. Upon clicking OK, we got the next panel as shown in Figure 12-57.




              Figure 12-57 Gauge for ESS 22219 Cluster performance




12.2.3 Zooming in on the specific time period
            The previous chart shows some peaks of high cluster I/O rate during the period from April
            6th to 8th. We decided to zoom in on the peak activity, and so selected a narrower time
            period as shown in Figure 12-58 and clicked the Apply button.




           Figure 12-58 Zooming on specific time period for Total IO rate metric


12.2.4 Modify gauge to view array level metrics
            For the next chart, we decided to view an array-level metric for the same time period as
            before. Hence, we selected the gauge that we created earlier and clicked Properties as
            shown in Figure 12-59.




           Figure 12-59 Properties for a defined gauge




The subsequent panel is shown in Figure 12-60. We selected the Array-level metric Avg.
              Response time for Cluster 1, Device Adapter 1, Loop A, and Disk Group 2, as circled in
              Figure 12-60.




              Figure 12-60 Customizing gauge for array level metric




The resultant chart is shown in Figure 12-61.




Figure 12-61 Modified gauge with Avg. response time chart




12.2.5 Modify gauge to review multiple metrics in same chart
              Next, we decided to review Total I/O, reads/sec, and writes/sec in the same chart for
              comparison purposes. We selected these three metrics in the Gauge properties panel and
              clicked the Apply button. The resultant chart is shown in Figure 12-62.

                Tip: To select multiple metrics for the same chart, click the first metric, hold the Shift
                key, and click the last metric. If the metrics you want are not contiguous in the list,
                hold the Ctrl key instead of the Shift key.




              Figure 12-62 Viewing multiple metrics in the same chart

              In the chart, Writes and Total I/O overlap, and Reads are shown as zero.

                Tip: If you select multiple metrics that do not have the same units for the y-axis, an
                error is displayed, as shown in Figure 12-63.




              Figure 12-63 Error displayed if there are no common units


12.3 Performance Manager command line interface
           The Performance Manager module includes a command line interface known as perfcli,
           located in the directory C:\Program Files\IBM\mdm\pm\pmcli.

          In its present release, the perfcli utility includes support for ESS and SAN Volume Controller
          data collection task creation and management (starting and stopping data collection tasks).
          There are also executables that support viewing and management of task filters, alert
          thresholds, and gauges.

          There is detailed help available at the command line, with information about syntax and
          specific examples of usage.


12.3.1 Performance Manager CLI commands
          The Performance Manager Command Line Interface (perfcli) includes the following
          commands shown in Figure 12-64.




          Figure 12-64 Directory listing of the perfcli commands

             startesscollection/startsvccollection: These commands are used to build and run
             data collection against the ESS or SAN Volume Controller, respectively.
             lscollection: This command is used to list the running, aborted, or finished data
             collection tasks on the Performance Management server.
             stopcollection: This command may be used to stop data collection against a specified
             task name.
             lsgauge: You can use the lsgauge command to display a list of existing gauge names,
             types, device types, device IDs, modified dates, and description information.
             rmgauge: Use this command to remove existing gauges.
             showgauge: This command is used to display performance data output using an existing
             defined gauge.
             setessthresh/setsvcthresh: These two commands are respectively used to set ESS and
             SAN Volume Controller performance thresholds.
             cpthresh: You can use the cpthresh command to copy threshold properties from one
             selected device to one or more other devices.
             setfilter: You can use setfilter to set or change the existing threshold filters.
             lsfilter: This command may be used to display the threshold filter settings for all
             devices specified.




setoutput: This command may be used to view or modify the existing data collection
                  output formats, including settings for paging, row printing, format (default, XML, or
                  character delimited), header printing, and output verbosity.
                   lsdev: This command can be used to list the storage devices that are used by TotalStorage
                   Productivity Center for Disk.
                   lslun: This command can be used to list the LUNs or Performance Manager volumes
                   associated with storage devices.
                   lsthreshold: This command can be used to list the threshold status associated with
                   storage devices.
                  showcapacity: This command displays managed capacity, the sum of managed capacity
                  by device type, and the total of all ESS and SAN Volume Controller managed storage.
                  showdbinfo: This command displays percent full, used space, and free space of the
                  Performance Manager database.
                  lsprofile: Use this command to display Volume Performance Advisor profiles.
                  cpprofile: Use this command to copy Volume Performance Advisor profiles.
                   mkprofile: Use this command to create a workload profile that you can use later with the
                   mkrecom command to create a performance recommendation for ESS volume allocation.
                   mkrecom: Use this command to generate and, optionally, apply a performance LUN advisor
                   recommendation for ESS volumes.
                  lsdbpurge: This command can be used to display the status of database purge tasks
                  running in TotalStorage Productivity Center for Disk.
                  tracklun: This command can be used to obtain historical performance statistics used to
                  create a profile.
                  startdbpurge: Use this command to start a database purge task.
                  showdev: Use this command to display device properties.
                   rmprofile: Use this command to delete performance LUN advisor profiles.
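As an illustrative sketch (not part of the product), the commands in the preceding list could be driven from a script. The path matches the perfcli install directory noted at the start of this section; the wrapper function, its name, and the example call are our own assumptions, and output parsing is left to the caller because the format depends on the setoutput settings.

```python
import subprocess

# Hypothetical helper for scripting the perfcli commands listed above.
# PERFCLI_DIR is the install directory noted at the start of this section.
PERFCLI_DIR = r"C:\Program Files\IBM\mdm\pm\pmcli"

def run_perfcli(command, *args):
    """Run one perfcli command (for example, lscollection) and return its stdout."""
    result = subprocess.run(
        [command, *args],
        cwd=PERFCLI_DIR,          # run from the perfcli directory
        capture_output=True,      # capture stdout and stderr
        text=True,                # decode output as text
        check=True,               # raise if the command fails
    )
    return result.stdout
```

For example, run_perfcli("lscollection") would list the data collection tasks on the Performance Management server.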




12.3.2 Sample command outputs
          We show some sample commands in Figure 12-65. This sample shows how to invoke
          perfcli commands from the Windows command line interface.




          Figure 12-65 Sample perfcli command from Windows command line interface

          Figure 12-66 and Figure 12-67 show perfcli sample commands within the perfcli tool.




          Figure 12-66 perfcli sample command within perfcli tool




          Figure 12-67 perfcli lslun sample command within perfcli tool




12.4 Volume Performance Advisor (VPA)
               The Volume Performance Advisor (VPA) is designed to be an expert advisor that
               recommends allocations for storage space based on the size of the request, an estimate
               of the performance requirement and type of workload, and the existing load on an ESS
               that might compete with the new request. The Volume Performance Advisor then
               recommends the number and size of logical volumes (LUNs) to allocate, and a location
               within the ESS that is a good placement with respect to the defined performance
               considerations. The user is given the option of implementing the recommendation
               (allocating the storage) or obtaining further recommendations.


12.4.1 VPA introduction
              Data placement within a large, complex storage subsystem has long been recognized as a
              storage and performance management issue. Performance may suffer if done casually or
              carelessly. It can also be costly to discover and correct those performance problems, adding
              to the total cost of ownership.

              Performance Manager is designed to contain an automated approach for storage allocation
              through the functions of a storage performance advisor. It is called the Volume Performance
              Advisor (VPA). The advisor is designed to automate decisions that could be achieved by an
              expert storage analyst given the time and sufficient information. The goal is to give very good
              advice by allowing VPA to consider the same factors that an administrator would in deciding
              where to best allocate storage.

                Note: At this point in time, the VPA tool is available for IBM ESS only.


12.4.2 The provisioning challenge
               You want to allocate a specific amount of storage to run a particular workload. The
               requester could be a storage administrator interacting through a user interface, or
               another system component (such as a SAN management product, file system, Database
               Management System (DBMS), or logical volume manager) interacting with the VPA
               Application Programming Interface (API).

               A storage request is satisfied by selecting some number of logical volumes (LUNs). For
               example, if you ask for 400 GB of storage, a low I/O rate, cache-friendly workload could
               be handled on a single 400 GB logical disk residing on a single disk array, whereas a
               cache-unfriendly, high-bandwidth application might need several logical volumes
               allocated across multiple disk arrays, using LVM, file system, or database striping to
               achieve the required performance.
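The trade-off described above can be sketched numerically. This toy function, with made-up throughput and size limits, estimates how many volumes a request might be spread across; it illustrates the reasoning only and is not the VPA algorithm.

```python
import math

def volumes_needed(total_gb, required_mb_s, per_array_mb_s=40.0, max_volume_gb=400.0):
    """Toy estimate: use enough volumes (one per array) to meet a bandwidth
    target, and enough volumes to respect a maximum volume size.
    All limits here are illustrative assumptions, not ESS figures."""
    by_bandwidth = math.ceil(required_mb_s / per_array_mb_s)
    by_size = math.ceil(total_gb / max_volume_gb)
    return max(by_bandwidth, by_size, 1)

# A cache-friendly 400 GB request at 20 MB/s fits on one volume,
# while a 200 MB/s request would be striped across five arrays.
```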

              The performance of those logical disks depends on their placement on physical storage, and
              what other applications might be sharing the arrays. The job of the Volume Performance
              Advisor (VPA) is to select an appropriate set (number and placement) of logical disks that:
                  Consider the performance requirements of the new workload
                  Balance the workload across the physical resources
                  Consider the effects of the other workloads competing for the resources




Storage administrators and application developers need tools that pull together all the
           components of the decision process used for provisioning storage. They need tools to
           characterize and manage workload profiles. They need tools to monitor existing performance,
           and tools to help them understand the impact of future workloads on current performance.
           What they need is a tool that automates this entire process, which is what VPA for ESS does.


12.4.3 Workload characterization and workload profiles
           Intelligent data placement requires a rudimentary understanding of the application workload,
           and the demand likely to be placed on the storage system. For example, cache-unfriendly
           workloads with high I/O intensity require a larger number of physical disks than cache-friendly
           or lightweight workloads. To account for this, the VPA requires specific workload descriptions
           to drive its decision-making process. These workload descriptions are precise, indicating I/O
           intensity rates; percentages of read, write, random, and sequential content; cache
           information; and transfer sizes. This workload-based approach is designed to allow the VPA
           to correctly match performance attributes of the storage with the workload attributes with a
            high degree of accuracy. For example, workloads with high random-write content might best
            be pointed to RAID 10 storage. High cache hit ratio environments can probably be satisfied
            with fewer logical disks.

           Most users have little experience or capability for specifying detailed workload characteristics.
           The VPA is designed to deal with this problem in three ways:
              Predefined workload definitions based on characterizations of environments across
              various industries and applications. They include standard OLTP type workloads, such as
              “OLTP High”, and “Batch Sequential”.
              Capturing existing workloads by observing storage access patterns in the environment.
              The VPA allows the user to point to a grouping of volumes and a particular window of time,
              and create a workload profile based on the observed behavior of those volumes.
              Creation of hypothetical workloads that are similar to existing profiles, but differ in some
              specific metrics.

           The VPA has tools to manage a library of predefined and custom workloads, to create new
           workload profiles, and to modify profiles for specific purposes.


12.4.4 Workload profile values
            It is possible to change many specific values in the workload profile. For example, the
            access density may be high because a test workload used small files; it can be adjusted to
            a more accurate number. The average transfer size always defaults to 8 KB and should be
            modified if other information is available about the actual transfer size.

           The peak activity information should also be adjusted. It defaults to the time when the profile
           workload was measured. In an existing environment it should specify the time period for
           contention analysis between existing workloads and the new workload. Figure 12-68 on
           page 480 shows a user defined VPA workload profile.




Figure 12-68 User defined workload profile details example


12.4.5 How the Volume Performance Advisor makes decisions
              As mentioned previously, the VPA is designed to take several factors into account when
              recommending volume allocation:
                  Total amount of space required
                  Minimum and maximum number of volumes, and sizes of volumes
                  Workload requirements
                  Contention from other workloads
              VPA tries to allocate volumes on the least busy resources, at the same time balancing
              workload across available resources. It uses the workload profile to estimate how busy
              internal ESS resources will become if that workload is allocated on those resources. So it
               estimates how busy the RAID arrays, disk adapters, and controllers will become. The workload
              profile is very important in making that decision.
               For example, cache hit ratios affect the activity on the disk adapters and RAID arrays. When
              creating a workload profile from existing data, it's important for you to pick a representative
              time sample to analyze. Also you should examine the IO/sec per GB. Many applications have
              access density in the range of 0.1 to 3.0. If it is significantly outside this range, then this might
              not be an appropriate sample.
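The sanity check on access density can be expressed directly. This is a minimal sketch of the 0.1 to 3.0 IO/sec-per-GB rule of thumb mentioned above; the function names are our own, not part of the product.

```python
def access_density(io_per_sec, capacity_gb):
    """I/O operations per second per GB for a measured sample."""
    return io_per_sec / capacity_gb

def is_representative(density, low=0.1, high=3.0):
    """Many applications fall between 0.1 and 3.0 IO/sec per GB; a sample
    far outside that range may not be appropriate for building a profile."""
    return low <= density <= high

# 600 IO/sec against 400 GB gives a density of 1.5, inside the typical range.
```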

The VPA will tend to utilize resources that can best accommodate a particular type of
            workload. For example, high write content will make RAID 5 arrays busier than RAID 10, and
           VPA will therefore bias to RAID 10. Faster devices will be less busy, so VPA biases allocations
           to the faster devices. VPA also analyzes the historical data to determine how busy the internal
           ESS components (arrays, disk adapters, clusters) are due to other workloads. In this way,
           VPA tries to avoid allocating on already busy ESS components.

           If VPA has a choice among several places to allocate volumes, and they appear to be about
           equal, it is designed to apply a randomizing factor. This keeps the advisor from always giving
           the same advice, which might cause certain resources to be overloaded if everyone followed
           that advice. This also means that several usages of VPA by the same user may not
           necessarily get the same advice, even if the workload profiles are identical.

             Note: VPA tries to allocate the fewest possible volumes, as long as it can allocate on
             components with low utilization. If the components look too busy, it will allocate more
             (smaller) volumes as a way of spreading the workload. It will not recommend more volumes
             than the maximum specified by the user.
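The randomized tie-breaking described above can be sketched as follows. This is our own simplified illustration, not the product's algorithm: candidates are scored only by an assumed utilization estimate, and near-equal candidates are chosen among at random.

```python
import random

def pick_location(candidates, tolerance=0.05, rng=None):
    """Pick a location from {name: estimated_utilization}, preferring the
    least busy and breaking near-ties randomly so that repeated requests
    do not all land on the same resource."""
    rng = rng or random.Random()
    best = min(candidates.values())
    # Candidates within `tolerance` of the least busy count as "about equal".
    near_equal = [name for name, util in sorted(candidates.items())
                  if util - best <= tolerance]
    return rng.choice(near_equal)

# With arrays at 20%, 22%, and 80% estimated utilization, the advice
# varies between the first two and never picks the busy array.
```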

           VPA may however be required to recommend allocation on very busy components. A
           utilization indicator in the user panels will indicate whether allocations would cause
           components to become heavily utilized. The I/O demand specified in the workload profile for
           the new storage being allocated is not a Service Level Agreement (SLA). In other words,
           there is no guarantee that the new storage, once allocated, will perform at or above the
           specified access density. The VPA will make recommendations unless the available space on
           the target devices is exhausted.

           An invocation of VPA can be used for multiple recommendations. To handle a situation when
           multiple sets of volumes are to be allocated with different workload profiles, it is important that
           the same VPA wizard be used for all sets of recommendations. Select Make additional
           recommendations on the View Recommendations page, as opposed to starting a
           completely new sequence for each separate set of volumes to be allocated. VPA is designed
           to remember each additional (hypothetical) workload when making additional
           recommendations.

           There are, of course, limitations to the use of an expert advisor such as VPA. There may well
           be other constraints (like source and target Flashcopy requirements), which must be
           considered. Sometimes these constraints can be accommodated with careful use of the tool,
           and sometimes they may be so severe that the tool must be used very carefully. That is why
           VPA is designed as an advisor.

            In summary, the Volume Performance Advisor (VPA) provides a tool to help automate the
            complex decisions involved in data placement and provisioning. In short, it represents a
            future direction of storage management software: computers should monitor their resources
            and make autonomic adjustments based on that information. The VPA is an expert advisor
            that provides a step in that direction.


12.4.6 Enabling the Trace Logging for Director GUI Interface
            Enabling GUI logging can be useful for troubleshooting GUI problems, however unlikely,
            that you may encounter while using VPA. Because this function requires a reboot of the
            server where TotalStorage Productivity Center for Disk is running, you may want to enable
            it before starting to use the VPA.




On the Windows platform, follow these steps:
               1. Select Start → Run and run regedit.exe.
               2. Open the HKEY_LOCAL_MACHINE → SOFTWARE → Tivoli → Director → CurrentVersion
                  key.
               3. Modify the LogOutput value. Set the value to 1.
               4. Reboot the server.

               The output log location for the preceding instructions is X:\Program Files\IBM\Director\log
               (where X is the drive where the Director application is installed). The log file for the
               Director is com.tivoli.console.ConsoleLauncher.stderr.

               On the Linux platform, the TWGRas.properties file turns output logging on. You need to
               remove the comment from the last line in the file (twg.sysout=1) and ensure that you have
               set TWG_DEBUG_CONSOLE as an environment variable.
                   For example, in bash: export TWG_DEBUG_CONSOLE=true


12.4.7 Getting started
               In this section, we provide detailed steps for using VPA with predefined performance
               parameters (workload profiles) that you can use for advice about optimal volume
               placement in your environment.

              For detailed steps on creating customized workload profiles, you may refer to 12.4.8,
              “Creating and managing workload profiles” on page 508.

               To use VPA with a customized workload profile, the major steps are:
                  Create a data collection task in Performance Manager
                  In order to utilize the VPA, you must first have a useful amount of performance data
                  collected from the device you want to examine. Refer to “Performance Manager data
                  collection” on page 429 for more detailed instructions regarding use of the Performance
                  data collection feature of the Performance Manager.
                   Schedule and run a successful performance data collection task
                   It is important to have an adequate amount of historical data to provide a statistically
                   relevant sample population.
                  Create or use a user-defined workload profile
                  Use the Volume Performance Advisor to:
                  –   Add Devices
                  –   Specify Settings
                  –   Select workload profile (predefined or user defined)
                  –   View Profile Details
                  –   Choose Candidate Location
                  –   Verify Settings
                   –   Approve the recommendations (or restart the VPA process with different parameters)

              Workload profiles
               The basic VPA concept, and the storage administrator’s goal, is to balance the workload
               across all device components. This requires detailed ESS configuration information,
               including all components (clusters, device adapters, logical subsystems, ranks, and
               volumes).




To express the workload represented by the new volumes, they are assigned a workload
profile. A workload profile contains various performance attributes:
   I/O demand, in I/O operations per second per GB of volume size
    Average transfer size, in KB
   Percentage mix of I/O - sequential or random, and read or write
    Cache utilization - the percentage of cache hits for random reads and cache misses for
    random writes
   Peak activity time - the time period when the workload is most active

You can create your own workload profile definitions in two ways:
   By copying existing profiles and editing their attributes
   By performing an analysis of existing volumes in the environment
   The second option is known as a Workload Analysis. You select one or more existing
   volumes, and the historical performance data for these volumes is retrieved to determine
   their (average) performance behavior over time.
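The profile attributes listed above map naturally onto a simple record. This sketch uses our own hypothetical field names to show what such a profile carries; it is not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    # Field names are illustrative, not the product's schema.
    name: str
    io_demand_per_gb: float          # I/O operations per second per GB
    avg_transfer_kb: float           # average transfer size in KB (defaults to 8)
    pct_sequential: float            # remainder is random I/O
    pct_read: float                  # remainder is writes
    pct_random_read_cache_hit: float
    pct_random_write_cache_miss: float
    peak_window: str                 # time period when the workload is most active

# A copy of a predefined profile with an edited transfer size:
oltp = WorkloadProfile(
    name="OLTP Standard (copy)",
    io_demand_per_gb=1.5,
    avg_transfer_kb=16.0,            # adjusted from the 8 KB default
    pct_sequential=10.0,
    pct_read=70.0,
    pct_random_read_cache_hit=50.0,
    pct_random_write_cache_miss=30.0,
    peak_window="09:00-17:00",
)
```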

Using VPA with pre-defined workload profile
This section describes a VPA example using a default workload profile. The purpose of this
section is to help you become familiar with the VPA tool.

However, we recommend that you generate and use a customized workload profile after
gathering performance data. A customized profile will be more realistic in terms of your
application performance requirements.

The VPA provides five predefined (canned) Workload Profile definitions. They are:
1. OLTP Standard: for general Online Transaction Processing Environment (OLTP)
2. OLTP High: for higher demand OLTP applications
3. Data Warehouse: for data warehousing applications
4. Batch Sequential: for batch applications accessing data sequentially
5. Document Archival: for archival applications, write-once, read-infrequently

    Note: Online Transaction Processing (OLTP) is a type of program that facilitates and
    manages transaction-oriented applications. OLTP is frequently used for data entry and
    retrieval transactions in a number of industries, including banking, airlines, mail order,
    supermarkets, and manufacturers. Probably the most widely installed OLTP product is
    IBM's Customer Information Control System (CICS®).


Launching VPA tool
The steps to use a default workload profile to have the Volume Performance Advisor
examine and advise you on volume placement are:
1. In the IBM Director Task pane, click Multiple Device Manager.
2. Click Manage Performance.
3. Click Volume Performance Advisor.




4. You can choose two methods to launch VPA:
                  a. “Drag and Drop” the VPA icon to the storage device to be examined (see Figure 12-69).




              Figure 12-69 “Drag and Drop” the VPA icon to the storage device




b. Select storage device → right-click the device → select Volume Performance Advisor
      (see Figure 12-70).




Figure 12-70 Select ESS and right-click for VPA

If a storage device that is not in the scope of the VPA is selected for the drag and drop
step, the message shown in Figure 12-71 opens. Devices such as a CIMOM or an SNMP
device will generate this error. Only the ESS is supported at this time.




Figure 12-71 Error launching VPA example


ESS User Validation
If this is the first time you are using the VPA tool for the selected ESS device, the ESS User
Validation panel displays as shown in Figure 12-72 on page 486. If you have already
validated the ESS user for VPA usage, this panel is skipped and the VPA Settings default
panel launches as shown in Figure 12-77 on page 488.




Figure 12-72 ESS User validation screen example

                  In the ESS User Validation panel, specify the user name, password, and port for each of
                  the IBM TotalStorage Enterprise Storage Servers (ESSs) that you want to examine.
                   During the initial setup of the VPA, on the ESS User Validation window, you need to first
                   select the ESS (as shown in Figure 12-73 on page 487) and then input the correct
                   username, password, and password verification.
                  You must click Set after you have input the correct username, password, and password
                  verification in the appropriate fields (see highlighted portion with circle in Figure 12-74 on
                  page 487). When you click Set, the application will populate the data you input (masked)
                  into the correct Device Information fields in the Device Information box (see Figure 12-75
                  on page 487).
                   If you do not click Set before clicking OK, the following errors will appear, depending
                   on what data needs to be entered:
                  BWN005921E (ESS Specialist username has not been entered correctly or applied)
                  BWN005922E (ESS Specialist password has not been entered correctly or applied)
                  If you encounter these errors, ensure you have correctly input the values in the input fields
                  in the lower part of the ESS user validation window and then retry by clicking OK.
                  The ESS user validation window contains the following fields:
                  – Devices table - Select an ESS from this table. It includes device IDs and device IP
                    addresses of the ESS devices on which this task was dropped.
                  – ESS Specialist username - Type a valid ESS Specialist user name and password for
                    the selected ESS. Subsequent displays of the same information for this ESS show the
                    user name and password that was entered. You can change the user name by
                    entering a new user name in this field.
                  – ESS Specialist password - Type a valid ESS Specialist password for the selected
                    ESS. Any existing password entries are removed when you change the ESS user
                    name.
                  – Confirm password - Type the valid ESS Specialist password again exactly as you
                    typed it in the password field.
                  – ESS Specialist port - Type a valid ESS port number. The default is 80.
                  – Set button - Click to set names, passwords, and ports without closing the panel.
                  – Remove button - Click to remove the selected information.
                  – Add button - Click to invoke the Add devices panel.


– OK button - Click to save the changes and close the panel.




Figure 12-73 ESS User validation - select ESS




Figure 12-74 Apply ESS Specialist user defined input




Figure 12-75 Applied ESS Specialist user defined input




Click the OK button to save the changes and close the panel. The application will attempt
                  to access the ESS storage device.
                   The error message in Figure 12-76 can indicate the use of an incorrect username or
                   password for authentication. It can also appear if a firewall prevents you from
                   authenticating to the storage device. If this error occurs, verify that you are using the
                   correct username and password, and that you have firewall access so that you can
                   authenticate and establish storage device connectivity.




              Figure 12-76 Authentication error example


              Configuring VPA settings for the ESS diskspace request
              After you have successfully completed the User Validation step, the VPA Settings window
              will open (see Figure 12-77).




              Figure 12-77 VPA Settings default panel




You use the Volume performance advisor - Settings window to identify your requirements
   for host attachment and the total amount of space that you need. You can also use this
   panel to specify volume number and size constraints, if any. We will begin with our
   example as shown in Figure 12-78.




Figure 12-78 VPA settings for example




Here we describe the fields in this window:
                  – Total space required (GB) - Type the total space required in gigabytes. The smallest
                    allowed value is 0.1 GB. We requested 3 GB for our example.

                 Note: You cannot exceed the volume space available for examination on the server(s) you
                 select. To show the error, in this example we selected host Zombie and a Total space
                 required of 400 GB. We got the error shown in Figure 12-79.
                    Action: Retry with different values and look at the server log for details.
                    Solutions:
                     – Specify a smaller Total space required (GB) value and retry this step.
                     – Select more hosts to include adequate volume space for this task.
                     – You may want to select the box entitled Consider volumes that have already been
                       allocated but not assigned in the performance recommendation.

                 Enabling the Director log file will generate logs for troubleshooting Director GUI
                 components, including the Performance Manager console. In this example, the file we
                 reference is:
                    com.tivoli.console.ConsoleLauncher.stderr
                    (com.tivoli.console.ConsoleLauncher.stdout is also useful)

                The sample log is shown in Figure 12-80.




              Figure 12-79 Error showing that the requested space exceeds the space available




              Figure 12-80 Director GUI console error log


                  – Specify a volume size range button - Click the button to activate the field, then use
                    the Minimum size (GB) spinner and the Maximum size (GB) spinner to specify the
                    range. In this example, we selected 1 GB as minimum and 3 GB as maximum.




– Specify a volume quantity range button - Click the button to activate the field, then
     use the Minimum number spinner and the Maximum number spinner to specify the
     range.
   – Consider volumes that have already been allocated but not assigned to hosts in
     the performance recommendation. If you check this box, VPA will use these types of
     volumes in the volume performance examination process.
      When this box (Consider volumes...) is checked and you click Next, the VPA wizard
      will open the following warning window (see Figure 12-81).




Figure 12-81 Consider volumes - warning window example


 Note: The BWN005996W message is a warning (W).

 You have selected to reuse unassigned existing volumes, which could potentially cause
 data loss. If you do not want to consider unassigned volumes, click OK to go back to the
 VPA Settings window. Click the Help button for more information.

 Explanation: The Volume Performance Advisor will assume that all currently unassigned
 volumes are not in use, and may recommend the reuse of these volumes. If any of these
 unassigned volumes are in use — for example, as replication targets or other data
 replication purposes — and these volumes are recommended for reuse, the result could
 be potential data loss.

 Action: Go back to the Settings window and unselect Consider volumes that have already
 been allocated but not assigned to hosts in the performance recommendation if you do not
 want to consider volumes that may potentially be used for other purposes. If you want to
 continue to consider unassigned volumes in your recommendations, then continue.

   – Host Attachments table - Select one or more hosts from this table. This table lists all
     hosts (by device ID) known to the ESS that you selected for this task.
       It is important to choose only hosts of the same server type for volume consideration.
       Also note that the VPA takes into account the maximum volume limitations of each
       server type, such as Windows (256 volumes maximum) and AIX (approximately 4000
       volumes). If you select a volume range above the server limit, VPA will display an error.
       In our example we used the host “Zombie”.
   – Next button - Click to invoke the Choose workload profile window. You use this
     window to select a workload profile from a list of existing profile templates.
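The per-server-type volume limits noted above can be expressed as a small lookup table. This sketch is our own illustration (the dictionary and function names are assumptions); the limits are the approximate figures from the text:

```python
# Approximate per-OS volume limits mentioned in the text.
MAX_VOLUMES_BY_OS = {"windows": 256, "aix": 4000}

def volume_quantity_ok(os_type, requested_max):
    """Return True if the requested maximum volume count is within the server-type limit."""
    limit = MAX_VOLUMES_BY_OS.get(os_type.lower())
    if limit is None:
        raise ValueError("unknown server type: %s" % os_type)
    return requested_max <= limit
```

For example, a volume quantity range with a maximum of 300 on a Windows host would exceed the 256-volume limit and, per the text, would produce a VPA error.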




5. After entering your preferred parameters, click Next; the Choose workload profile
                 window will display (see Figure 12-82).




              Figure 12-82 VPA Choose workload profile window example


              Choosing a workload profile
                  You can use the Choose workload profile window to select a workload profile from a list
                  of existing profiles. The Volume performance advisor uses the workload profile and other
                  performance information to advise you about where volumes should be created. For our
                  example we have selected the OLTP Standard default profile type.
                  – Workload profiles table - Select a profile from this table to view or modify. The table
                    lists predefined or existing workload profile names and descriptions. Predefined
                    workload profiles are shipped with Performance Manager. Workload profiles that you
                    previously created, if any, are also listed.
                  – Manage profiles button - Click to invoke the Manage workload profile panel.
                  – Profile details button - Click to see details about the selected profile in the Profile
                    details panel as shown in Figure 12-83 on page 493. Details include the following types
                    of information:
                      •   Total I/O per second per GB
                      •   Random read cache hits
                      •   Sequential and random reads and writes
                      •   Start and end dates
                      •   Duration (days)



Note: You cannot modify the properties of the workload profile from this panel. The panel
 options are grayed out (inactive). You can make changes to a workload profile from the
 Manage Profile → Create like panel.

   – Next button - Click to invoke the Choose candidate locations window. You can use
     this panel to select volume locations for the VPA to consider.




Figure 12-83 Properties for OLTP Standard profile

6. After reviewing the properties of the predefined workload profiles, you may select the
   workload profile from the table that most closely resembles your workload requirements.
   For our scenario, we selected the OLTP Standard workload name from the Choose
   workload profile window. We use this workload profile for the LUN placement
   recommendations.
   – Name - Shows the default profile name. The following restrictions apply to the profile
     name.
       •   The workload profile name must be between 1 to 64 characters.
       •   Legal characters are A-Z, a-z, 0-9, “-”, “_”, “.”, and “:”
       •   The first character cannot be “-” or “_”.


•   Spaces are not acceptable characters.
                  – Description - Shows the description of the workload profile.
                  – Total I/O per second per GB - Shows the values for the selected workload profile
                    Total I/O per second rate.
                  – Average transfer size (KB) - Shows the values for the selected workload profile.
                  – Caching information box - Shows the cache hits and destage percentages:
                      •   Random read cache hits
                          Range from 1 - 100%. The default is 40%.
                      •   Random write destage
                          Range from 1 - 100%. The default is 33%.
                  – Read/Write information box - Shows the read and write values.
                     The percentages for the four fields must equal 100%.
                      •   Sequential reads - The default is 14%.
                      •   Sequential writes - The default is 23%.
                      •   Random reads - The default is 36%.
                      •   Random writes - The default is 32%.
                  – Peak activity information box
                     Because we are currently only viewing the properties of an existing profile, the
                     parameters in this box are not selectable. However, you may review this box for
                     future reference. After you review its properties, you may click the Close button.
                     When creating a new profile, this box allows you to enter the following parameters:
                      •   Use all available performance data radio button. You can select this option if you
                          want to include all available performance data previously collected in consideration
                          for this workload profile.
                      •   Use the specified peak activity period radio button. You can select this button as
                          an alternate option (instead of using the Use all available performance data
                          option) for consideration in this workload profile definition.
                      •   Time setting drop down menu. Select from the following options for the time setting
                          you want to use for this workload profile.
                          - Device time
                          - Client time
                          - Server time
                          - GMT
                      •   Past days to analyze spinner. Use this spinner (or manually enter the number) to
                          select the number of days of historical information you want to consider for this
                          workload profile analysis.
                      •   Time Range drop down lists. Select the Start time and End time to consider using
                          the appropriate fields.
                  – Close button - Click to close the panel. You will be returned to the Choose workload
                    profile window.
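The profile-name rules listed in step 6 can be captured in a short validator. The regular expression and function name are our own illustration, not part of the product:

```python
import re

# 1-64 characters from A-Z, a-z, 0-9, "-", "_", ".", ":"; no spaces;
# the first character must not be "-" or "_".
_NAME_RE = re.compile(r'^[A-Za-z0-9.:][A-Za-z0-9._:\-]{0,63}$')

def is_valid_profile_name(name):
    """Return True if the name satisfies the documented workload profile naming rules."""
    return bool(_NAME_RE.match(name))
```

For example, `ITSO_Batch_Daily` is valid, while `_daily` (leading underscore) and `my profile` (embedded space) are not.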




Choosing candidate locations
Select the name of the profile you want to use from the VPA Choose workload profile
window; the Choose Candidate Locations window will then open (see Figure 12-84).
We chose our OLTP Standard workload profile for the VPA analysis.




Figure 12-84 Choose candidate locations window

   You can use the Choose candidate locations page to select volume locations for the
   performance advisor to consider. You can choose to either include or exclude the selected
   locations for the advisor's consideration.
   The VPA uses historical performance information to advise you about where volumes
   should be created. The Choose candidate locations page is one of the panels the
   performance advisor uses to collect and evaluate the information.
   – Device list - Displays device IDs or names for each ESS on which the task was
     activated (each ESS on which you dropped the Volume advisor icon).
   – Component Type tree - When you select a device from the Device list, the selection
     tree opens on the left side of the panel.
      The ESS component levels are shown in the tree. The following objects might be
      included:
      •   ESS
      •   cluster
      •   device adapter


•   array
                      •   disk group
                     The component level names are followed by information about the capacity and the
                     disk utilization of the component level. For example, we used the System component
                     level. It shows Component ID 2105-F20-16603, Type System, Description
                     2105-F20-16603-IBM, Available capacity 311 GB, and Utilization Low (see
                     Figure 12-84 on page 495).

                Tip: You can select the different ESS component types and the VPA will reconsider the
                volume placement advice based on that particular selection. To familiarize yourself with
                the options, select each component in turn to determine which component type's advice
                you prefer before proceeding to the next step.

                  Select a component type from the tree to display a list of the available volumes for that
                  component in the Candidates table (see Figure 12-84 on page 495). We chose System
                  for this example; in this case it represents the entire ESS system.
                  Click the Add button to add the component selected in the Candidates table to the
                  Selected candidates table. Figure 12-85 shows the selected candidate as 2105-F20-16603.




              Figure 12-85 VPA Choose candidate locations Component Type tree example (system)




Verify settings for VPA
   Click the Next button to invoke the Verify Settings window (see Figure 12-86).




Figure 12-86 VPA Verify settings window example

   You can use the Verify settings panel to verify the volume settings that you specified in
   the previous panels of the VPA.




Approve recommendations
                  After you have successfully completed the Verify Settings step, click the Next button,
                  and the Approve Recommendations window opens (see Figure 12-87).




              Figure 12-87 VPA Recommendations window example

                  You use the Recommendations window to first view the recommendations from the VPA
                  and then to create new volumes based on the recommendations.
                  In this example, VPA also recommends the volume location 16603:2:4:1:1700 in the
                  Component ID column. This means the recommended volume location is the ESS with
                  ID 16603, Cluster 2, Device Adapter 4, Array 1, and volume ID 1700.
                  With this information, you can also create the volume manually via the ESS Specialist
                  browser interface, or use VPA to create it.
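The colon-delimited Component ID format described above can be unpacked with a small helper. This parser is our own illustration of the ESS-ID : cluster : device-adapter : array : volume-ID layout, not a product API:

```python
def parse_vpa_location(component_id):
    """Split a VPA Component ID such as '16603:2:4:1:1700' into its parts."""
    ess, cluster, adapter, array, volume = component_id.split(":")
    return {
        "ess_id": ess,               # ESS serial, e.g. 16603
        "cluster": int(cluster),     # cluster number
        "device_adapter": int(adapter),
        "array": int(array),
        "volume_id": volume,         # hexadecimal volume ID, kept as a string
    }
```

Applied to the example above, `parse_vpa_location("16603:2:4:1:1700")` yields ESS 16603, Cluster 2, Device Adapter 4, Array 1, and volume ID 1700.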
                  In the Recommendations window of the wizard, you can choose whether the
                  recommendations are to be implemented, and whether to loop around for another set of
                  recommendations. At this time, you have two options (other than to cancel the operation).
                  Make your final selection to Finish or return to the VPA for further recommendations.
                  a. If you do not want to assign the volumes using the current VPA advice, or want the
                     VPA to make another recommendation, check only the Make Additional
                     Recommendations box.

b. If you want to use the current VPA recommendation and make additional volume
       assignments at this time, select both the Implement Recommendations and Make
       Additional Recommendations check boxes. If you choose both options, you must
       first wait until the current set of volume recommendations is created, or created and
       assigned, before continuing. If you make this type of selection, a secondary window
       will appear which runs synchronously within the VPA.

 Tip: Stay in the same VPA session if you are going to implement volumes and add new
 volumes. This will enable VPA to provide advice for your current selections, checking for
 previous assignments, and verifying that no other VPA is processing the same volumes.


VPA loopback after Implement Recommendations selected
In the following example, we show the results of a VPA session.
1. In this example, we decided to Implement recommendations and also Make additional
   recommendations. Hence we selected both check boxes (see Figure 12-88).




Figure 12-88 VPA Recommendation selected check box




2. Click the Continue button to proceed with VPA advice (see Figure 12-88 on page 499).




              Figure 12-89 VPA results - in progress panel

              3. In Figure 12-89, we can see that the volumes are being created on the server we selected
                 previously. This process takes a little time, so be patient.
              4. Figure 12-90 indicates that the volume creation and assignment to the ESS has
                 completed. Be patient; momentarily, the VPA loopback sequence will continue.




              Figure 12-90 VPA final results




5. After the volume creation step has successfully completed, the following Settings window
   will again open so that you may add more volumes (see Figure 12-91).




Figure 12-91 VPA settings default




For the additional recommendations, we decided to use the same server. This time,
              however, we specified the Volume quantity range instead of the Volume size range for
              the requested space of 2 GB. See Figure 12-92.




              Figure 12-92 VPA additional space request




After clicking Next, the Choose Profile Panels opens. We selected the same profile as before:
OLTP Standard. See Figure 12-93.




Figure 12-93 Choose Profile




After clicking Next, the Choose candidate locations panel opens. We selected Cluster from
              the Component Type drop-down list. See Figure 12-94.




              Figure 12-94 Choose candidate location

              The Component Type Cluster shows the Component ID as 2105-F20-16603:2, Type as
              Cluster, Description as 2, Available capacity as 308 GB, and Utilization as Low. This
              indicates that VPA plans to provision additional capacity on Cluster 2 of the ESS.




After clicking the Add button, Cluster 2 is a selected candidate for the new volume. See
Figure 12-95.




Figure 12-95 Choose candidate location - select cluster




Upon clicking Next, the Verify settings panel opens as shown in Figure 12-96.




              Figure 12-96 Verify settings




After verifying the settings and clicking Next, the VPA recommendations window opens. See
Figure 12-97.




Figure 12-97 VPA recommendations




Because the purpose of this example is only to show the VPA looping, we decided to
              uncheck both the Implement Recommendations and Make additional
              recommendations check boxes. Clicking Finish completed the VPA example (Figure 12-98).




              Figure 12-98 Finish VPA panel


12.4.8 Creating and managing workload profiles
              The VPA bases its volume placement recommendations on the characteristics of the
              chosen workload profile. VPA recommendations will not be accurate if an inappropriate
              workload profile is chosen, which may cause future performance issues for the application.

              You must have a valid and appropriate workload profile created before using VPA for any
              application. Therefore, creating and managing workload profiles is an important task, which
              involves regular upkeep of workload profiles for each application whose disk I/O is served
              by the ESS. Figure 12-99 on page 509 shows a typical sequence of managing workload profiles.




[Figure 12-99 is a flow diagram; its content is summarized here.]

Managing profiles: Determine the I/O workload type of the target application. If it closely
matches a predefined profile, choose the predefined profile or a Create like profile. If there
is no match with a predefined profile, choose Create profile; in that case, create an I/O
performance data collection task, initiate I/O performance data collection covering peak
load times and gather sufficient samples, then specify the time period of peak activity.
Validate the analysis results; if the results are not acceptable, re-validate the data
collection parameters. Once the results are accepted, save the profile.


Figure 12-99 Typical sequence for managing workload profiles

Before using VPA for any additional disk space requirement for an application, you will need
to:
   Determine the typical I/O workload type of that application
   Collect performance data that covers peak load time periods

You will need to determine the broad category the selected I/O workload fits into, for
example, whether it is OLTP high, OLTP standard, Data Warehouse, Batch sequential, or
Document archival. This is shown as the highlighted box in the diagram. TotalStorage
Productivity Center for Disk provides predefined profiles for these workload types, and it
allows you to create additional similar profiles by choosing Create like.

If you do not find a match with the predefined profiles, you may prefer to create a new
profile. When choosing Create like or Create profiles, you will also need to specify historical
performance data samples covering the peak load activity time period. Optionally, you may
specify additional I/O parameters.

When you submit the Create or Create like profile, the performance analysis is performed
and the results are displayed. Depending on the outcome of the results, you may need to
re-validate the parameters of the data collection task and ensure that peak load samples are
taken correctly. If the results are acceptable, you may save the profile. This profile can be
referenced by VPA for future use.

In “Choosing workload profiles” on page 510, we cover step-by-step tasks using an example.




Choosing workload profiles
              You can use Performance Manager to select a predefined workload profile or to create a new
              workload profile based on historical performance data or on an existing workload profile.

              Performance Manager uses these profiles to create a performance recommendation for
              volume allocation on an IBM storage server. You can also use a set of Performance Manager
              panels to create and manage the workload profiles. There are three methods you can use to
              choose a workload profile as shown in Figure 12-100.




              Figure 12-100 Choosing workload profiles


                Note: Using a predefined profile does not require pre-existing performance data, but the
                other two methods require historical performance data from the target storage device.

              You can launch the workload profile management tool using the drag-and-drop method from
              the IBM Director console. Drag the Manage Workload Profile task to the target
              storage device as shown in Figure 12-101.




              Figure 12-101 Launch Manage Workload Profile



If you are using the Manage Workload Profile or VPA tool for the first time on the selected
ESS device, you will need to complete ESS user validation. This is described in detail
in “ESS User Validation” on page 485. The ESS user validation is the same for the VPA and
Manage Workload Profile tools.

After successful ESS user validation, the Manage Workload Profile panel opens
as shown in Figure 12-102.




Figure 12-102 Manage workload profiles

You can create or manage a workload profile using the following three methods:
1. Selecting a predefined workload profile
   Several predefined workloads are shipped with Performance Manager. You can use the
   Choose workload profile panel to select the predefined workload profile that most closely
   matches your storage allocation needs. The default profiles shipped with Performance
   Manager are shown in Figure 12-103.




Figure 12-103 Default workload profiles

   You can open the Properties panel of the respective predefined profile to verify the
   profile details. A sample profile for OLTP Standard is shown in Figure 12-83 on page 493.




2. Creating a workload profile similar to another profile
                  You can use the Create like panel to modify the details of a selected workload profile.
                  You can then save the changes and assign a new name to create a new workload profile from
                  the existing profile. To Create like a particular profile, these are the tasks involved:
                  a. Create a performance data collection task for the target storage device: You may
                     need to include multiple storage devices, based on your profile requirements for the
                     application.
                  b. Schedule data collection task: You may need to ensure that a data collection task
                     runs over a sufficient period of time, which truly represents a typical I/O load of the
                     respective application. The key is to have sufficient historical data.

                Tip: Best practice is to schedule the frequency of a performance data collection task so
                that it covers peak load periods of I/O activity and includes at least a few samples of peak
                loads. The number of samples depends on the I/O characteristics of the application.

                  c. Determine the closest workload profile match: Determine how the new workload
                     profile compares with the existing or predefined profiles. It may not be an exact fit,
                     but it should be of a broadly similar type.

                  d. Create the new similar profile: Using the Manage Workload Profile task, create the
                     new profile. You will need to select an appropriate time period for the historical data
                     that you collected earlier.
                  In our example, we created a similar profile using the Batch Sequential predefined
                  profile. First, we selected the Batch Sequential profile and clicked the Create like
                  button as shown in Figure 12-104.




              Figure 12-104 Manage workload profile - create like




The Properties panel for Batch Sequential is opened, as shown in Figure 12-105.




Figure 12-105 Properties for Batch sequential profile

We changed the following values for our new profile:
   Name: ITSO_Batch_Daily
   Description: For ITSO batch applications
   Average transfer size: 20KB
   Sequential reads: 65%
   Random reads: 10%
   Peak Activity information: We used a time period of the past 24 days, from 12 AM to 11 PM.
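The profile values changed above can be modeled with a small structure that also enforces the panel's rule that the four read/write percentages total 100%. This is an illustrative sketch; the sequential-write and random-write percentages below are placeholders (the Batch Sequential defaults are not reproduced here), and the class itself is not product code:

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    description: str
    avg_transfer_kb: int
    seq_reads_pct: int
    seq_writes_pct: int
    rand_reads_pct: int
    rand_writes_pct: int

    def rw_split_valid(self):
        # The four read/write percentages must equal 100%.
        return (self.seq_reads_pct + self.seq_writes_pct
                + self.rand_reads_pct + self.rand_writes_pct) == 100

profile = WorkloadProfile(
    name="ITSO_Batch_Daily",
    description="For ITSO batch applications",
    avg_transfer_kb=20,
    seq_reads_pct=65,
    rand_reads_pct=10,
    seq_writes_pct=15,   # placeholder value
    rand_writes_pct=10,  # placeholder value
)
```

A profile whose read/write split does not sum to 100% would be rejected by the panel, so a check like `rw_split_valid()` is a useful sanity test before saving.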




We saved our new profile (see Figure 12-106).




              Figure 12-106 New Profile




This new profile, ITSO_Batch_Daily, is now available in the Manage workload profile panel
as shown in Figure 12-107. This profile can now be used for VPA analysis. This completes
our example.




Figure 12-107 Manage profile panel with new profile

3. Creating a new workload profile from historical data

You can use the Manage workload profile panel to create a workload profile based on
historical data about existing volumes. You can select one or more volumes as the base for
the new workload profile. You can then assign a name to the workload profile, optionally
provide a description, and finally create the new profile.

To create a new workload profile, click the Create button as shown in Figure 12-108.




Figure 12-108 Create a new workload profile




This launches a new panel for creating a workload profile, as shown in Figure 12-109. At this
              stage, you need to specify the volumes for performance data analysis. In our example, we
              selected all volumes. To select multiple volumes, but not all, click the first volume, hold
              the Shift key, and click the last volume in the list. After all the required volumes are
              selected (shown in dark blue), click the Add button. See Figure 12-109.

                Note: The ESS volumes you specify should be representative of I/O behavior of the
                application, for which you are planning to allocate space using the VPA tool.




              Figure 12-109 Create new profile and add volumes




Upon clicking the Add button, all the selected volumes will be moved to the selected volumes
box as shown in Figure 12-110.




Figure 12-110 Selected volumes and performance period for new workload profile

In the Peak activity information box, you will need to specify an activity sample period for
Volume performance analysis. You can select the option Use all available performance
data or select Use the specified peak activity period. Based on your application peak I/O
behavior, you may specify the sample period with Start date, Duration in days, and Start / End
time.

For the time setting, choose one of the following options from the drop-down box:
   Device time
   Client time
   Server time
   GMT
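As a sketch of how these settings combine, the following assumed helper (not product code) expands a "past days to analyze" count and a daily start/end time into the per-day sampling windows an analysis would cover:

```python
from datetime import datetime, timedelta, time

def peak_activity_windows(past_days, start, end, now):
    """Return (start, end) datetime pairs, one per day, going back past_days days."""
    windows = []
    for d in range(1, past_days + 1):
        day = (now - timedelta(days=d)).date()
        # Each window applies the same daily start/end time to one past day.
        windows.append((datetime.combine(day, start),
                        datetime.combine(day, end)))
    return windows
```

For example, `peak_activity_windows(3, time(9, 0), time(17, 0), datetime(2005, 12, 1))` returns three daily 09:00-17:00 windows for November 28-30, 2005.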




After you have entered all the fields, click Next. You will see the Create workload profile -
              Review panel as shown in Figure 12-111.




              Figure 12-111 Review new workload profile parameters

              You can specify a Name for the new workload profile and a Description. You may enter a
              detailed description that covers:
                  The application name for which the profile is being created
                  What application I/O activity is represented by the peak activity sample
                  When it was created
                  Who created it (optional)
                  Any other relevant information your organization requires

              In our example, we created a profile named New_ITSO_app1_profile. At this point you may
              click Finish.

              At this point, TotalStorage Productivity Center for Disk begins the volume performance
              analysis based on the parameters you provided. This process may take some time,
              depending on the number of volumes and the sampling time period, so be patient. Finally, it
              will show the outcome of the analysis.




In our example, we got the results notification message shown in Figure 12-112. The
analysis determined that the results are not statistically significant, as indicated by the message:

BWN005965E: Analysis results are not significant.

This may indicate that:
   There is not enough I/O activity on the selected volumes
   The time period chosen for sampling is not correct
   The correct volumes were not chosen

You have an option to Save or Discard the profile. We decided to save the profile
(Figure 12-113).




Figure 12-112 Results for Create Profile

After you save the profile, it is listed in the Manage workload profile panel as shown in
Figure 12-113.




Figure 12-113 Manage workload profile with new saved profile

The new profile can now be referenced by the VPA (Volume Performance Advisor) for future use.




13


   Chapter 13.   Using TotalStorage Productivity
                 Center for Data
                 This chapter introduces you to the TotalStorage Productivity Center for Data and discusses
                 the available functions.

This chapter provides the information necessary to accomplish the following
tasks:
                     Discover and monitor storage assets enterprise-wide
                     Report on enterprise-wide assets, files and filesystems, databases, users, and
                     applications
                     Provide alerts (set by the user) on issues such as capacity problems, policy violations, etc.
                     Support chargebacks by usage or capacity




© Copyright IBM Corp. 2005. All rights reserved.                                                              521
13.1 TotalStorage Productivity Center for Data overview
              This section describes the business purpose of TotalStorage Productivity Center for Data
              (Data Manager), its architecture, components, and supported platforms.


13.1.1 Business purpose of TotalStorage Productivity Center for Data
              The primary business purpose of TotalStorage Productivity Center for Data is to help the
              storage administrator keep data available to applications so the company can produce
              revenue.

              Through monitoring and reporting, TotalStorage Productivity Center for Data helps the
              storage administrator prevent outages in the storage infrastructure. Armed with timely
              information, the storage administrator can take action to keep storage and data available to
              the application. TotalStorage Productivity Center for Data also helps to make the most
              efficient use of storage budgets by allowing administrators to use their existing storage more
              efficiently, and more accurately predict future storage growth.


13.1.2 Components of TotalStorage Productivity Center for Data
              At a high level, the major components of TotalStorage Productivity Center for Data are:
                  Server, running on a managing server, with access to a database repository
                  Agents, running on one or more Managed Devices
                  Clients (using either a locally installed GUI, or a browser-based Web GUI) which users
                  and administrators use to perform storage monitoring tasks.

              Data Manager Server
              The Data Manager Server:
                  Controls the discovery, reporting, and Alert functions
                  Stores all data in the central repository
                  Issues commands to Agents for jobs (either scheduled or ad hoc)
                  Receives requests from the user interface clients for information, and retrieves the
                  requested information from the central data repository.
                  Extends filesystems automatically
                  Reports on the IBM TotalStorage Enterprise Storage Server (ESS) and can also provide
                  LUN provisioning

              An RDBMS (either locally or remote) manages the repository of data collected from the
              Agents, and the reporting and monitoring capabilities defined by the users.

              WWW Server
              The Web Server is optional, and handles communications to allow remote Web access to the
              Server. The WWW Server can run on the same physical server as the Data Manager Server.

              Data Agent (on a Managed System)
              The Agent runs Probes and Scans, collects storage-related information from the managed
              system, and forwards it to the Manager to be stored in the database repository, and acted on
              if so defined. An Agent is required for every host system to be monitored, with the exception
              of NetWare and NAS devices.




Novell NetWare and NAS devices do not currently support locally installed Agents; they are
managed through an Agent installed on a machine that accesses the NetWare or NAS
device. That Agent discovers information on the volumes or filesystems that are accessible
to its host.

The Agents are quite lightweight. They listen for commands from the Server, and then
perform a Probe (against the operating system) and/or a Scan (against selected
filesystems). Normal operation might include one scheduled Scan per day or week, plus various
ad hoc Scans. Scans and Probes are discussed later in this chapter.

           Clients (direct-connected and Web connected)
           Direct-connect Clients have the GUI to the Server installed locally. They communicate
           directly to the Manager to perform administration, monitoring, and reporting. The Manager
           retrieves information requested by the Clients from the database repository.

           Web-connect clients use the WWW Server to access the user interface through a Web
           browser. The Java administrative applet is downloaded to the Web Client machine and
           presents the same user interface that Direct-connect Clients see.


13.1.3 Security considerations
           TotalStorage Productivity Center for Data has two security levels: non-administrative users
           and administrators:
              Non-administrator users can:
              – View the data collected by TotalStorage Productivity Center for Data
              – Create, generate, and save reports
              Administrators can:
              – Create, modify, and schedule Pings, Probes, and Scans
              – Create, generate, and save reports
              – Perform administrative tasks and customize the TotalStorage Productivity Center for
                Data environment
              – Create Groups, Profiles, Quotas, and Constraints
              – Set Alerts


13.2 Functions of TotalStorage Productivity Center for Data
           An overview of the functions of TotalStorage Productivity Center for Data is provided in this
           section and explored in detail in the rest of the chapter. TotalStorage Productivity Center for
           Data is designed to be easy to use, quick to install, with flexible and powerful configuration.
           The main functions of the product are:
              Automatically discover and monitor disks, partitions, shared directories, and servers
              Reporting to track asset usage and availability
              – Physical inventory - disks, partitions, servers
              – Logical inventory - filesystems and files, databases and tables
              – Forecasting demand versus capacity
              – Standardized and customized reports, on-demand and batched
              – Various user-defined levels of grouping
              – From summary level down to individual file for userID granularity
              Alerts - execute scripts, e-mail, SNMP traps, event log
              Quotas
              Chargeback

                                            Chapter 13. Using TotalStorage Productivity Center for Data   523
13.2.1 Basic menu displays
Figure 13-1 shows the main menu for TotalStorage Productivity Center for Data. The
configured Agents show under the Agents entry, so this display gives a quick summary of
the state of each Agent. Several icons indicate the status of the Agents:
   Green circle - Agent is communicating with the Server
   Red crossed circle - Agent is down
   Red triangle - Agent on that system is not reachable
   Red crossed square - Agent was connected, but an update of the TotalStorage
   Productivity Center for Data Agent is currently running




              Figure 13-1 Agent summary




Figure 13-2 shows the TotalStorage Productivity Center for Data dashboard. This is the
default right-hand pane display when you start TotalStorage Productivity Center for Data and
shows a quick summary of the overall health of the storage environment. It can quickly show
you potential problem areas for further investigation.




Figure 13-2 TotalStorage Productivity Center for Data - dashboard

The dashboard contains four viewable areas, which cycle among seven pre-defined sets of
panels. To cycle, use the Cycle Panels button. Use the Refresh button to update the display.

Enterprise-wide summary
The Enterprise-wide Summary panel shows statistics accumulated from all the Agents. The
statistics are:
   Total filesystem capacity available
   Total filesystem capacity used
   Total filesystem free capacity
   Total allocated and unallocated disk space
   Total disk space unallocated to filesystems
   Total LUN capacity
   Total usable LUN capacity
   Total number of monitored servers
   Total number of unmonitored servers
   Total number of storage subsystems
   Total number of users
   Total number of disks
   Total number of LUNs
   Total number of filesystems
   Total number of directories
   Total number of files
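A minimal sketch of how such enterprise-wide totals could be accumulated from per-Agent statistics follows; the field names and values are invented for illustration, not the product's data model.

```python
# Invented per-Agent statistics summed into enterprise-wide totals;
# a Counter adds matching fields across Agents.
from collections import Counter

agent_stats = [
    {"fs_capacity_gb": 500, "fs_used_gb": 320, "disks": 4, "files": 120000},
    {"fs_capacity_gb": 250, "fs_used_gb": 100, "disks": 2, "files": 45000},
]

totals = Counter()
for stats in agent_stats:
    totals.update(stats)  # adds each field's value to the running total

print(dict(totals))  # {'fs_capacity_gb': 750, 'fs_used_gb': 420, 'disks': 6, 'files': 165000}
```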




Filesystem Used Space
              This panel displays a pie chart showing the distribution of used and free space in all
              filesystems. Different chart types can be selected here. This provides a quick snapshot of
              your filesystem space utilization efficiency.

              Users Consuming the Most Space
              By default this panel displays a bar chart (different chart types can be selected) of the users
              who are using the largest amount of filesystem space.

              Monitored Server Summary
              This panel shows a table of total disk filesystem capacity for the monitored servers sorted by
              OS type.

              Filesystems with Least Free Space Percentage
              This panel shows a table of the most full filesystems, including the percent of space free, the
              total filesystem capacity, and the filesystem mount point.

              Users Consuming the Most Space Report
              This panel shows the same information as the Users Consuming the Most Space panel, but
              in a table format.

              Alerts Pending
              This panel shows active Alerts that have been triggered but are still pending.


13.2.2 Discover and monitor Agents, disks, filesystems, and databases
              TotalStorage Productivity Center for Data uses three methods to discover information about
              the assets in the storage environment: Pings, Probes, and Scans. These are typically set up
              to run automatically as scheduled tasks. You can define different Ping, Probe, and Scan jobs
              to run against different Agents or groups of Agents (for example, to run a regular Probe of all
              Windows systems) according to your particular requirements.

              Pings
              A Ping is a standard ICMP Ping which checks registered Agents for availability. If an Agent
              does not respond to a Ping (or a pre-defined number of Pings) you can set up an Alert to take
              some action. The actions could be one, any, or all of:
                  SNMP trap
                  TEC Event
                  Notification at login
                  Entry in the Windows event log
                  Run a script
   Send e-mail to specified users
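The Ping mechanism can be illustrated with a short sketch. This is not the product's code; the class, thresholds, and data below are invented to show how an availability percentage and an alert trigger could be derived from a history of Ping results.

```python
# Illustrative sketch (not product code): track Ping results per Agent,
# compute the availability percentage, and decide whether an Alert
# should fire after a run of consecutive missed Pings.
from dataclasses import dataclass, field

@dataclass
class PingHistory:
    agent: str
    results: list = field(default_factory=list)  # True = Agent responded

    def record(self, responded: bool) -> None:
        self.results.append(responded)

    def availability_pct(self) -> float:
        # Percentage of Pings the Agent responded to
        if not self.results:
            return 0.0
        return 100.0 * sum(self.results) / len(self.results)

    def should_alert(self, max_missed: int = 3) -> bool:
        # Fire when the last `max_missed` Pings all failed
        tail = self.results[-max_missed:]
        return len(tail) == max_missed and not any(tail)

h = PingHistory("KANAGA")
for ok in [True, True, False, False, False]:
    h.record(ok)
print(h.availability_pct())  # 40.0
print(h.should_alert())      # True
```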




Pings are used to generate Availability Reports, which list the percentage of times a computer
has responded to the Ping. An example of an Availability Report for Ping is shown in
Figure 13-3. Availability Reports are discussed in detail in 13.11.3, “Availability Reporting” on
page 604.




Figure 13-3 Availability Report - Ping


Probes
Probes are used to gather information about the assets and system resources of monitored
servers, such as processor count and speed, memory size, disk count and size, filesystems,
etc. The data collected by the Probe process is used in the Asset Reports described in
13.11.1, “Asset Reporting” on page 595.
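As a rough illustration of the kind of data a Probe gathers, the following sketch collects comparable host and filesystem facts using only the Python standard library. The function name and output layout are assumptions for this example, not the product's API.

```python
# Illustrative only: gather the kind of asset data a Probe collects
# (host name, OS, processor count, filesystem capacity) using the
# Python standard library. probe_assets() is an invented name.
import os
import platform
import shutil

def probe_assets(paths=("/",)):
    info = {
        "hostname": platform.node(),
        "os": platform.system(),
        "cpu_count": os.cpu_count() or 1,
        "filesystems": {},
    }
    for p in paths:
        usage = shutil.disk_usage(p)  # (total, used, free) in bytes
        info["filesystems"][p] = {
            "total_bytes": usage.total,
            "used_bytes": usage.used,
            "free_bytes": usage.free,
        }
    return info

print(probe_assets())
```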

Figure 13-4 shows an Asset report for detected disks.




Figure 13-4 Asset Report of discovered disks




Figure 13-5 shows an Asset Report for detected database tablespaces.




              Figure 13-5 Asset Report of database tablespaces


              Scans
The Scan process gathers statistics about usage and trends of the server storage. The
data collected by Scan jobs is tailored by Profiles, and the results are stored in the
enterprise repository. This data feeds the Capacity, Usage, Usage Violations, and Backup
Reporting functions. These reports can be scheduled to run regularly, or run ad hoc by the
administrator.

Profiles limit scanning according to the parameters specified in the Profile. Profiles are
used in Scan jobs to specify which file patterns are scanned, which attributes are
gathered, which summary views are available in reports, and the retention period for the
statistics. TotalStorage Productivity Center for Data supplies a number of default Profiles,
and additional Profiles can be defined. Table 13-1 on page 547 shows the
default Profiles provided. Some of these include:
                  Largest files - Gathers statistics on the largest files
                  Largest directories - Gathers statistics on the largest directories
                  Most at risk - Gathers statistics on the files that have been modified the longest time ago
                  and have not been backed up since modified (Windows Agents only)
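For instance, the work of a "Largest files" Profile can be sketched as a directory walk that keeps only the N biggest files. The function and its parameters are illustrative assumptions; the product stores its results in the repository rather than printing them.

```python
# Sketch of a "Largest files" style Scan: walk a tree and keep the N
# biggest files. Parameters are illustrative, not the product's code.
import heapq
import os

def largest_files(root, n=20):
    candidates = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            candidates.append((size, path))
    return heapq.nlargest(n, candidates)  # sorted largest first

for size, path in largest_files(".", n=5):
    print(f"{size:>12} {path}")
```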




Figure 13-6 shows a sample of a report produced from data collected in Scans.




          Figure 13-6 Summary View - by filesystem, disk space used and disk space free

          This report shows a list of the filesystems on each Agent, the amount of space used in each,
          expressed in bytes and as a percentage, the amount of free space, and the total capacity
          available in the filesystem.


13.2.3 Reporting
Reporting in TotalStorage Productivity Center for Data is very powerful, with over 300
pre-defined views, and the capability to customize those standard views, save the custom
report, and add it to your menu for scheduled or ad hoc reports. You can also create your own
reports according to particular needs and run them as needed, or in batch
(regularly). Reports can be produced in table format or in a variety of chart (graph) formats.
You can export reports to CSV or HTML format for external use.
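As a hedged example of what a CSV export might look like, the following sketch writes a few invented report rows with Python's csv module; the column names and values are assumptions for illustration only.

```python
# Hedged example: exporting report rows to CSV for external use.
# The column names and rows are invented for this illustration.
import csv
import io

rows = [
    {"computer": "KANAGA", "filesystem": "C:\\", "used_pct": 61.8},
    {"computer": "COLORADO", "filesystem": "/", "used_pct": 43.2},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["computer", "filesystem", "used_pct"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```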

          Reports are generated against data already in the repository. A common practice is to
          schedule Scans and Probes just before running reports.

          Reporting can be done at almost any level in the system, from the enterprise down to a
          specific entity and any level in between. Figure 13-6 shows a high-level summary report. Or,
          you can drill down to something very specific. Figure 13-7 is an example of a lower-level
          report, where the administrator has focused on a particular Agent, KANAGA, to look at a
          particular disk on a particular controller.




Figure 13-7 Asset Report - KANAGA assets

              Reports can be produced either system-wide or grouped into views, such as by computer, or
              OS type.


  Restriction: Currently, there is a maximum of 32,767 (2^15 - 1) rows per report. Therefore,
                you cannot produce a report to list all the .HTM files in a directory containing a million files.
                However, you can (and it would be more productive to do so) produce a report of the 20
                largest files in the directory, or the 20 oldest files, for example.


              TotalStorage Productivity Center for Data allows you to group information about similar
              entities (disk, filesystems, etc.) from different servers or business units into a summary report,
              so that business and technology administrators can manage an enterprise infrastructure. Or,
              you can summarize information from a specific server - the flexibility and choice of
              configuration is entirely up to the administrator.

You can report on a point in time, or produce a historical report, showing storage growth
              trends over time. Reporting lets you track actual demand for disk over time, and then use this
              information to forecast future demand for the next quarter, two quarters, year, etc. Figure 13-8
              is an example of a historical report, showing a graph of the number of files on the C drive on
              the Agent KANAGA.




Figure 13-8 Historical report of filesystem utilization

TotalStorage Productivity Center for Data has three basic types of reports:
   Computers and filesystems
   Databases
   Chargeback

Reporting categories
Major reporting categories for filesystems and databases are:
   Asset Reporting uses the data collected by Probes to build a hardware inventory of the
   storage assets. You can then navigate through a hierarchical view of the assets by drilling
   down through computers, controllers, disks, filesystems, directories, and exports. For
   database reporting, information on instances, databases, tables, and data files is
   presented for reporting.
   Storage Subsystems Reporting provides information showing storage capacity at a
   computer, filesystem, storage subsystem, LUN, and disk level. These reports also enable
   you to view the relationships among the components of a storage subsystem. For a list of
   supported devices, see <table>.
   Availability Reporting shows responses to Ping jobs, as well as computer uptime.
   Capacity Reporting shows how much storage capacity is installed, how much of the
   installed capacity is being used, and how much is available for future growth. Reporting is
   done by disk and filesystem, and for databases, by database.
   Usage Reporting shows the usage and growth of storage consumption, grouped by
   filesystem, and computers, individual users, or enterprise-wide.
   Usage Violation Reporting shows violations to the corporate storage usage policies, as
   defined through TotalStorage Productivity Center for Data. Violations are either of Quota
   (defining how much storage a user or group of users is allowed) or Constraint (defining
   which file types, owners and file sizes are allowed on a computer or storage entity). You
   can define what action should be taken when a violation is detected - for example, SNMP
   trap, e-mail, or running a user-written script.
   Backup Reporting identifies files which are at risk because they have not been backed up.
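The Quota side of Usage Violation Reporting can be sketched as a simple comparison of scanned usage against defined limits. All names, limits, and data below are invented for illustration; the product evaluates these from Scan results.

```python
# Invented data and limits: flag users whose scanned usage exceeds
# their Quota (falling back to a default limit).
def quota_violations(usage_by_user, quotas, default_gb=10.0):
    violations = []
    for user, used_gb in usage_by_user.items():
        limit = quotas.get(user, default_gb)
        if used_gb > limit:
            violations.append((user, used_gb, limit))
    return violations

usage = {"alice": 12.5, "bob": 4.0, "carol": 25.0}
quotas = {"carol": 20.0}
for user, used, limit in quota_violations(usage, quotas):
    print(f"{user}: {used} GB used, quota {limit} GB")
```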

Reporting on the Web
It is easy to customize TotalStorage Productivity Center for Data to set up a reports Web site,
so that anyone in the organization can view selected reports through their browser. 13.16, “Setting up
a reports Web site” on page 698 explains how to do this. Figure 13-9 shows an
example of a simple Web site to view TotalStorage Productivity Center for Data reports.




              Figure 13-9 TotalStorage Productivity Center for Data Reports on the Web


13.2.4 Alerts
              An Alert defines an action to be performed if a particular event occurs or condition is found.
Alerts can be set on physical objects (computers and disks) or logical objects (filesystems,
              directories, users, databases, and OS user groups). Alerts can tell you, for instance, if a disk
              has a lot of recent defects, or if a filesystem or database is approaching capacity.

              Alerts on computers and disks come from the output of Probe jobs and are generated for
              each object that meets the triggering condition. If you have specified a triggered action
              (running a script, sending an e-mail, etc.) then that action will be performed if the condition is
              met.

              Alerts on filesystems, directories, users, and OS user groups come from the combined output
              of a Probe and a Scan. Again, if you have specified an action, that action will be performed if
              the condition is met.

An Alert is always registered in the Alert log; in addition, you can define one, some, or all of the
following actions to be performed:
                  Send an e-mail indicating the nature of the Alert.
                  Run a specific script with relevant parameters supplied from the content of the Alert.
                  Make an entry into the Windows event log.
                  Pop up next time the user logs in to TotalStorage Productivity Center for Data.
                  Send an SNMP trap.
                  Log a TEC event

              Refer to 13.4, “OS Alerts” on page 555 for details on alerts.
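A hedged sketch of how such an Alert condition might be evaluated follows; the threshold, field names, and action names are invented for illustration, since the product defines these through its GUI.

```python
# Invented threshold and field names: check each filesystem against a
# capacity Alert condition and return the triggered actions.
def evaluate_capacity_alert(filesystems, threshold_pct=90.0,
                            actions=("alert-log", "email")):
    triggered = []
    for fs in filesystems:
        used_pct = 100.0 * fs["used"] / fs["capacity"]
        if used_pct >= threshold_pct:
            triggered.append((fs["mount"], round(used_pct, 1), actions))
    return triggered

data = [
    {"mount": "/", "capacity": 100, "used": 95},
    {"mount": "/data", "capacity": 400, "used": 200},
]
print(evaluate_capacity_alert(data))
```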

13.2.5 Chargeback: Charging for storage usage
           TotalStorage Productivity Center for Data provides the ability to produce Chargeback
           information for storage usage. The following items can have charges allocated against them:
              Operating system storage by user
              Operating system disk capacity by computer
              Storage usage by database user
              Total size by database tablespace
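A capacity-based chargeback calculation can be sketched as usage multiplied by a cost rate; the rate and consumers below are hypothetical, and the product can instead emit an invoice or a CIMS-format file.

```python
# Hypothetical rate and usage: produce invoice lines and a total.
def chargeback(usage_gb, rate_per_gb=0.50):
    lines = [(who, gb, round(gb * rate_per_gb, 2)) for who, gb in usage_gb.items()]
    total = round(sum(cost for _, _, cost in lines), 2)
    return lines, total

lines, total = chargeback({"Finance": 120.0, "Engineering": 480.5})
for who, gb, cost in lines:
    print(f"{who:12} {gb:8.1f} GB  ${cost:8.2f}")
print(f"Total: ${total:.2f}")  # Total: $300.25
```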

           TotalStorage Productivity Center for Data can directly produce an invoice or create a file in
           CIMS format. CIMS is a set of resource accounting tools that allow you to track, manage,
           allocate, and charge for IT resources and costs. For more information on CIMS see the Web
           site:
   http://www.cims.com

           Chargeback is a very powerful tool for raising the awareness within the organization of the
           cost of storage, and the need to have the appropriate tools and processes in place to manage
           storage effectively and efficiently.

           Refer to 13.17, “Charging for storage usage” on page 700 for more details on Chargebacks.



13.3 OS Monitoring
           The Monitoring features of TotalStorage Productivity Center for Data enable you to run
           regularly scheduled or ad hoc data collection jobs. These jobs gather statistics about the
           storage assets and their availability and their usage within your enterprise, and make the
           collected data available for reporting.

           This section gives a quick overview of the monitoring jobs, and explains how they work
           through practical examples.

           Reporting on the collected data is explained in “Data Manager reporting capabilities” on
           page 592.




13.3.1 Navigation tree
              Figure 13-10 shows the complete navigation tree for OS Monitoring which includes Groups,
              Discovery, Pings, Probes, Scans, and Profiles.




              Figure 13-10 OS Monitoring tree

              Except for Discovery, you can create multiple definitions for each of those monitoring features
              of TotalStorage Productivity Center for Data. To create a new definition, right-click the feature
              and select Create <feature>. Figure 13-11 shows how to create a new Scan job.




              Figure 13-11 Create Scan job creation




Once saved, any definition within TotalStorage Productivity Center for Data can be updated
          by clicking the object. This will put you in Edit mode. Save your changes by clicking the floppy
          disk icon in the top menu bar.

          Discovery, Pings, Probes, and Scan menus contain jobs that can run on a scheduled basis or
          ad hoc. To execute a job immediately, right-click the job then select Run Now (see
          Figure 13-12). Each execution of a job creates a time-stamped output that can be displayed
          by expanding the tree under the job (you may need to right-click the job and select Refresh
          Job List).




          Figure 13-12 OS Monitoring - Jobs list

          The color of the job output represents the job status:
             Green - Successful run
             Brown - Warnings occurred during the run
             Red - Errors occurred during the run
             Blue - Running jobs

          To view the output of a job, double click the job.

Groups and Profiles are definitions that may be used by other jobs; they do not produce
output themselves.

          As shown in Figure 13-12, all objects created within Data Manager are prefixed with the user
          ID of the creator. Default definitions, created during product installation, are prefixed with
          TPCUser.Default.

          Groups, Discovery, Probes, Scans, and Profiles are explained in the following sections.


13.3.2 Groups
          Before defining monitoring and management jobs, it may be useful to group your resources
          so you can limit the scope of monitoring or data collection.


Computer Groups
              Computer Groups allow you to target management jobs on specific computers based on your
              own criteria. Some criteria you might consider for grouping computers are platform type,
              application type, database type, and environment type (for example, test or production).

Our lab environment contains Windows 2000 servers. In order to target specific servers for
monitoring based on OS and/or database type, we defined the following groups:
                  Windows Systems
                  Windows DB Systems

              To create the first group, expand Data Manager → Monitoring → Groups → Computer,
              right-click Computer and select Create Computer Group. Our first group will contain all
              Windows systems as shown in Figure 13-13. To add or remove a host from the group,
              highlight it in either the Available or Current Selections panel and use the arrow buttons. You
              can also enter a meaningful description in the field.




              Figure 13-13 Computer Group definition

              To save the new Group, click the floppy disk icon in the menu bar, and enter the Group name
              in the confirmation box shown in Figure 13-14.




              Figure 13-14 Save a new Computer Group




We created the other group using the same process, and named it Windows DB Systems.


 Important: To avoid redundant data collection, a computer can belong to only one Group
 at a time. If you add a system that is already in a Group to a second Group, it is
 automatically removed from the first Group.
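The one-Group-at-a-time rule can be sketched as follows; the class and data structures are invented for illustration, not the product's internals.

```python
# Invented structures: adding a computer to a new Group automatically
# removes it from its previous Group, mirroring the rule above.
class ComputerGroups:
    def __init__(self):
        self.group_of = {}   # computer -> its (single) group
        self.members = {}    # group -> set of member computers

    def add(self, computer, group):
        old = self.group_of.get(computer)
        if old is not None:
            self.members[old].discard(computer)  # leave the old group
        self.group_of[computer] = group
        self.members.setdefault(group, set()).add(computer)

g = ComputerGroups()
g.add("KANAGA", "Windows Systems")
g.add("KANAGA", "Windows DB Systems")
print(g.members)  # KANAGA is now only in "Windows DB Systems"
```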


Figure 13-15 shows the final Group configuration, with the members of the Windows Systems
group.




Figure 13-15 Final Computers Group definitions


 Note: The default group TPCUser.Default Computer Group contains all servers that have
 been discovered, but not yet assigned to a Group.


Filesystem Groups
Filesystem Groups are used to associate together filesystems from different computers that
have some commonality. You can then use this group definition to focus the Scan and the
Alert processes to those filesystems.

To create a Filesystem Group, you must explicitly select each filesystem for each computer
you want to include in the group. There is no way to make a grouped selection, for example,
the / (root) filesystem for all UNIX servers or C: for all Windows platforms.

 Note: As for computers, a filesystem can belong to only one Group.




                                 Chapter 13. Using TotalStorage Productivity Center for Data   537
Directory Groups
              Use Directory Groups to group together directories to which you want to apply the same
              storage management rules.

Figure 13-16 shows the Directory Group definition screen, which you reach by going to
Data Manager → Monitoring → Groups → Directory, right-clicking Directory, and selecting
Create Directory Group.




              Figure 13-16 Directory group definition

              The Directory Group definition has two views for directory selection:
                  Use directories by computer to specify several directories for one computer.
                  Use computers by directory to specify one directory for several computers.

              The button on the bottom of the screen toggles between New computer and New directory
              depending on the view you select.




We will define one Directory Group with a DB2 directory for a specific computer (Colorado).
To define the Group:
1. Select directories by computer.
2. Click New computer.
3. Select colorado from the pull-down Computer field.
4. Enter C:\DB2\NODE0000 in the Directories field and click Add (see Figure 13-17).




Figure 13-17 Directories for computer configuration

5. Click OK.
6. Save the group as DB2 Node.

Figure 13-18 shows our final Groups configuration and details of the OracleArchive Group.




Figure 13-18 Final Directories Group definition




User Groups
You can define Groups made up of selected user IDs. These groupings enable you to
easily define and focus storage management rules, such as Scans and Constraints, on the
defined IDs.

                Note: You can include in a User Group only user IDs that are defined on the discovered
                hosts and that have files belonging to them.



                Note: As with computers, a user can be defined in only one Group.



              OS User Group Groups
              You can define Groups consisting of operating system user groups such as Administrators
              for Windows or adm for UNIX. To define a Group consisting of user groups, select OS User
              Group from the Groups entry on the left hand panel.

                Note: As for users, an OS User Group will be added to the list of available Groups only
                when a Scan job finds at least one file owned by a user belonging to that Group.



                Note: As with users, an OS User Group can belong to only one Group at a time.


13.3.3 Discovery
              The Discovery process is used to discover new computers within your enterprise that have
              not yet been monitored by Data Manager. The discovery process will:
                  Request a list of Windows systems from the Windows Domain Controller
                  Contact, through SNMP, all NAS filers and check if they are registered in the nas.config
                  file
                  Discover all NetWare servers in the NetWare trees reported by Agents
                  Search UNIX Agents’ mount tables, looking for remote filesystems and discover NAS filers

              More details of NAS and NetWare discovery are given in the manual IBM Tivoli Storage
              Resource Manager: A Practical Introduction, SG24-6886.

              Use the path Data Manager → Monitoring → Discovery to change the settings of the
              Discovery job. The following options are available.

              When to run tab
              The initial tab, When to Run (Figure 13-19), is used to modify the scheduling settings. You
              can specify when to execute the Discovery:
                  Now - Run once when the job is saved
                  Once - Run once at a specified time in the future
                  Repeatedly - Run at a chosen frequency in minutes, hours, days, weeks, or months. You
                  can limit the run to specific days of the week.




Figure 13-19 Discovery When to Run options


Alert tab
The second tab, Alert, enables you to be notified when a new computer is discovered. See
13.4, “OS Alerts” on page 555 for more details on the Alerting process.

Options tab
The third tab, Options (Figure 13-20), sets the discovery runtime properties.




Figure 13-20 Discovery job options


Uncheck the Skip Workstations field if you want to discover the Windows workstations
              reported by the Windows Domain Controller.


13.3.4 Pings
              The Ping process will:
                  Launch TCP/IP pings against monitored computers
                  Generate statistics on computer Availability in the central repository
                  Generate an Alert if the process fails because of an unavailable host

              Pings gather statistics about the availability of monitored servers. The scheduled job will Ping
              your servers and consider them active if it gets an answer. This is purely ICMP-protocol
              based - there is no measurement of individual application availability. When you create a new
              Ping job, you can set the following options.

              Computers tab
              Figure 13-21 shows the Computers tab, which is used to limit the scope of the computers that
              are to be Pinged.




              Figure 13-21 Ping job configuration - Computers




When to Ping tab
The When to Ping tab sets the frequency used for checking. We selected a frequency of 10
minutes, as shown in Figure 13-22 on page 543.




Figure 13-22 Ping job configuration - When to Ping


Options tab
On the Options tab, you specify how often the Ping statistics are saved in the database
repository. By default, TotalStorage Productivity Center for Data keeps its Ping statistics in
memory for eight Pings before flushing them to the database and calculating an average
availability. You can change the flushing interval to another time amount, or a number of
Pings (for example, to calculate availability after every 10 Pings). The system availability is
calculated as:
   (Count of successful pings) / (Count of pings)

A lower interval can increase database size, but gives you more accuracy on the availability
history.
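As an illustrative sketch (not product code), the flush-interval averaging described above can be modeled as follows; `availability` is a hypothetical name chosen for this example:

```python
# Hypothetical sketch of the availability calculation described above:
# availability = (count of successful pings) / (count of pings),
# averaged once per flush interval before being written to the repository.
def availability(ping_results, flush_interval=8):
    """ping_results: list of booleans (True = host answered).
    Returns one availability percentage per flush interval."""
    averages = []
    for i in range(0, len(ping_results), flush_interval):
        window = ping_results[i:i + flush_interval]
        averages.append(100.0 * sum(window) / len(window))
    return averages

# With the default of eight Pings per flush, one failed Ping yields 87.5%.
print(availability([True] * 7 + [False]))             # [87.5]
# Saving at each Ping gives only 100% or 0%, but finer granularity.
print(availability([True, False], flush_interval=1))  # [100.0, 0.0]
```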

We selected to save to the database at each Ping, which means we will have an availability
of 100% or of 0%, but we have a more granular view of the availability of our servers
(Figure 13-23).




Figure 13-23 Ping job configuration - Options


              Alert tab
              The Alert tab (shown in Figure 13-24) is used to generate Alerts for each host that is
              unavailable. Alert mechanisms are explained in more detail in 13.4, “OS Alerts” on page 555.

              You can choose any Alert type from the following:
                  SNMP trap to send a trap to the Event manager defined in Administrative services →
                  Configuration →General → Alert Disposition
                  TEC Event to send an event to a Tivoli Enterprise Console
                  Login Notification to direct the Alert to the specified user in the Alert Log (see 13.4, “OS
                  Alerts” on page 555)
                  Windows Event Log to generate an event to the Windows event log
                  Run Script to run a script on the specified server
                  Email to send a mail to the specified user through the Mail server defined in
                  Administrative services → Configuration → General → Alert Disposition




Figure 13-24 Ping job configuration - Alert

          We selected to run a script that will send popup messages to selected administrators. The
          script is listed in Example 13-1. Optimally, you would send an event to a central console such
          as the Tivoli Enterprise Console. Note that certain parameters are passed to the script - more
          information is given in “Alerts tab” on page 560.

          Example 13-1 Script PINGFAILED.BAT
          net send /DOMAIN:Colorado Computer %1 did not respond to last %2 ping(s). Please check it


          We then saved the Ping job as PingHosts, and tested it by right-clicking and selecting Run
          now. As the hosts did not respond, we received notifications as shown in Figure 13-25.




          Figure 13-25 Ping failed popup for GALLIUM

           More details about the related reporting features of TotalStorage Productivity Center for
           Data are in 13.11.3, “Availability Reporting” on page 604.


13.3.5 Probes
          The Probe process will:
             Gather Assets data on monitored computers
             Store data in the central repository
             Generate an Alert if the process fails


The Probe process gathers data about the assets and system resources of Agents such as:
                  Memory size
                  Processor count and speed
                  Hard disks
                  Filesystems

              The data collected by the Probe process is used by the Asset Reports described in 13.11.1,
              “Asset Reporting” on page 595.

              Computers tab
              Figure 13-26 shows that we included the TPCUser.Default Computer Group in the Probe so
              that all computers, including those not yet assigned to an existing Group, will be Probed. We
              saved the Probe as ProbeHosts.




              Figure 13-26 New Probe configuration


                Important: Only the filesystems that have been returned by a Probe job will be available
                for further use by Scan, Alerts, and policy management within TotalStorage Productivity
                Center for Data.


              When to Probe tab
              This tab has the same configuration as for the Ping process.

              We set up a weekly Probe to run on Sunday for all computers. We recommend running the
              Probe job at a time when all the production data you want to monitor is available to the
              system.

              Alert tab
              As this is not a business-critical process, we asked to be alerted by mail for any failed Probe.
              Figure 13-27 shows the default mail text configuration for a Probe failure.




Figure 13-27 Probe alert - mail configuration


13.3.6 Profiles
            Profiles are used in Scan jobs to limit the files that are scanned, to specify which file
            attributes are gathered, to select the summary views (filesystems and directories, user
            IDs, and OS user groups), and to set the statistics retention period.

            TotalStorage Productivity Center for Data provides default profiles that supply data for all
            the default reports.

           Profiles are used in Scan jobs to specify:
              The pattern of files to be scanned
              The attributes of files to be gathered
              The summary view that will be available in reports
              The statistics retention period

            Specifying correct Profiles avoids gathering unnecessary information that may lead to space
            problems within the repository. However, you will not be able to report on, or check Quotas
            for, files that are not included in the Profile.

            Data Manager comes with several default profiles (shown in Table 13-1), prefixed with
            TPCUser, which can be reused in any Scan jobs you define.
            Table 13-1 Default profiles

            Default profile name              Description

            BY_ACCESS                         Gathers statistics by length of time since last access of files

            BY_CREATION                       Gathers statistics by length of time since creation of files




Default profile name            Description

                BY_MOD_NOT_BACKED_UP            Gathers statistics by length of time since last modification (only for
                                                files not backed up since modification). Windows only

                BY_MODIFICATION                 Gathers statistics by length of time since last modification of files

                FILE_SIZE_DISTRIBUTION          Gathers file size distribution statistics

                LARGEST_DIRECTORIES             Gathers statistics on the n largest directories. (20 is the default
                                                amount.)

                LARGEST_FILES                   Gathers statistics on the n largest files. (20 is the default amount.)

                LARGEST_ORPHANS                 Gathers statistics on the n largest orphan files. (20 is the default
                                                amount.)

                MOST_AT_RISK                    Gathers statistics on the n files that have been modified the longest
                                                time ago and have not yet been backed up since they were
                                                modified. Windows only. (20 is the default amount.)

                OLDEST_ORPHANS                  Gathers statistics on the n oldest orphan files. (20 is the default
                                                amount.)

                MOST_OBSOLETE_FILES             Gathers statistics on the n “most obsolete” files (i.e., files that have
                                                not been accessed or modified for the longest period of time). (20
                                                is the default amount.)

                SUMMARY_BY_FILE_TYPE            Summarizes space usage by file extension

                SUMMARY_BY_FILESYSTEM           Summarizes space usage by Filesystem or Directory
                /DIRECTORY

                SUMMARY_BY_GROUP                Summarizes space usage by OS Group

                SUMMARY_BY_OWNER                Summarizes space usage by Owner

                TEMPORARY_FILES                 Gathers statistics on network-wide space consumed by temporary
                                                files

                WASTED_SPACE                    Gathers statistics on non-OS files not accessed in the last year and
                                                orphaned files

              Those default profiles, when set in a Scan job, gather data needed for all the default Data
              Manager reports.

              As an example, we will define an additional Profile to limit a Scan job to the 500 largest
              Postscript or PDF files unused in the last six months. We also want to keep weekly statistics
              at a filesystem and directory level for two weeks.

              Statistics tab
              On the Statistics tab (shown in Figure 13-28), we specified:
                  Retain filesystem summary for two weeks
                  Gather data based on creation date
                  Select the 500 largest files

              The Statistics tab is used to specify the type of data that is gathered, and has a direct
              impact on the type of reports that will be available. In our specific case, the Scan associated
              with this profile will not create data for reports based on user IDs and users groups. Neither
              will it create data for reports on directory size.



Figure 13-28 New Profile - Statistics tab

The Summarize space usage by section of the Statistics tab specifies how the space usage
data must be summarized. If no summary level is checked, the data will not be summarized,
and therefore will not be available for reporting in the corresponding level of the Usage
Reporting section of TotalStorage Productivity Center for Data.

In our particular case, because we selected to summarize by filesystem and directory, we will
see space used by PDF and PostScript files at those levels, provided we set up the Scan
Profile correctly (see 13.3.7, “Scans” on page 552 for details). We will not see which users
or groups have allocated those PDF and PostScript files.

 Restriction: For Windows servers, users and groups statistics will not be created for FAT
 filesystems.

The Accumulate history section sets the retention period of the collected data. In this case,
we will see a weekly summary for the last two weeks.

The Gather statistics by length of time since section sets the base date used to calculate the
file load. It determines if data will be gathered and summarized for the Data Manager →
Reporting → Usage → Files reporting view.

The Gather information on the section sets the amount of files to retrieve for each of the
report views available under Data Manager → Reporting → Usage → Access Load.

Files filter tab
The Files filter tab is used to limit the scope of files that are returned by the Scan job. To
create a selection, right-click the All files selected context-menu option as shown in
Figure 13-29.




Figure 13-29 New Profile - File filter

              With the New Condition menu, you can create a single filter on the files, while the New
              Group menu enables you to combine several conditions with:
              All of                    The file is selected if all conditions are met (AND)
              Any of                    The file is selected if at least one condition is met (OR)
              None of                   The file is selected only if none of the conditions are met (NOT OR)
              Not all of                The file is selected if at least one condition is not met (NOT AND)

              The Condition Group can contain individual conditions or other condition groups.

              Each individual condition will filter files based on one of the listed items:
                  Name
                  Last access time
                  Last modified
                  Creation time
                  Owner user ID
                  Owner group
                  Windows files attributes
                  Size
                  Type
                  Length

              We want to select files that meet our conditions: (name is *.ps or name is *.pdf) and
              not used in the last six months. The AND between our two conditions translates to All of,
              while the OR within our first condition translates to Any of.
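As an illustrative model (this is not the product's API; `file_selected` and the 180-day approximation of "six months" are assumptions for this sketch), the nested condition groups combine like boolean operators:

```python
# Illustrative model of nested condition groups (not the product's API).
import fnmatch

SIX_MONTHS_DAYS = 180  # assumed approximation of "six months"

def file_selected(filename, days_since_last_access):
    # Inner "Any of" group: name is *.ps OR name is *.pdf
    name_matches = any(fnmatch.fnmatch(filename, pat)
                       for pat in ("*.ps", "*.pdf"))
    # Outer "All of" group: name condition AND unused for six months
    return all([name_matches, days_since_last_access >= SIX_MONTHS_DAYS])

print(file_selected("report.pdf", 200))  # True
print(file_selected("report.pdf", 30))   # False: used recently
print(file_selected("notes.txt", 400))   # False: wrong extension
```

The same combinators generalize: a None of group would be `not any(...)`, and a Not all of group would be `not all(...)`.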

              On the screen shown in Figure 13-29, we selected New Group. From the popup screen,
              Figure 13-30, we selected All of and clicked OK.




Figure 13-30 New Condition Group

Now, within our All of group, we will create one dependent Any of group using the same
sequence. The result is shown in Figure 13-31.




Figure 13-31 New Profile - Conditions Groups

Now, we create individual conditions within each group by right-clicking New Condition on
the group where the conditions must be created. Figure 13-32 shows the creation of our first
condition for the Any of group. We enter in our file specifications (*.ps and *.pdf) here.




Figure 13-32 New Profile - New condition




We repeated the operation for the second condition (All of). The final result is shown in
              Figure 13-33.




              Figure 13-33 New Profile - Conditions

              The bottom of the right pane shows the textual form of the created condition. You can see that
              it corresponds to our initial condition. We saved the profile as PS_PDF_FILES (Figure 13-34).




              Figure 13-34 Profile save


13.3.7 Scans
              The Scan process is used to gather data about files and to summarize Usage statistics as
              specified in the associated profiles. It is mandatory for Quotas and Constraints management.
              The Scan process gathers statistics about the usage and trends of the server storage. Scan
              job results are stored in the repository and supply the data necessary for the Capacity,
              Usage, Usage Violations, and Backup Reporting facilities. To create a new Scan job, go to
              Data Manager → Monitoring → Scans, right-click, and select Create Scan. The scope of
              each Scan job is set by five different tabs on the right pane.

              Filesystems tab
              You can specify a specific filesystem for one computer, a filesystem Group (see “Filesystem
              Groups” on page 537) or all filesystems for a specific computer. Only the filesystems you
              have selected will be scanned. Figure 13-35 shows how to configure the Scan to gather data
              on all our servers.

                Note: Only filesystems found by the Probe process will be available for Scan.


Figure 13-35 New Scan configuration - Filesystem tab


Directory Groups tab
Use this tab to extend the scope of the Scan and also summarize data for the selected
directories. Only directories in the previously selected filesystems will be scanned.

Profiles tab
As explained in 13.3.6, “Profiles” on page 547, the Profiles are used to select the files that are
scanned for information gathering. A Scan job scans and gathers data only for files that are
scoped by selected Profiles. You can specify Profiles at two levels:
   Filesystems: All selected filesystems will be scanned and data summarized for each
   filesystem.
   Directory: All selected directories (if included in the filesystem) will be scanned and data
   summarized for each directory.




Figure 13-36 shows how to configure a Scan to have data summarized at both the filesystem
              and directory level.





              Figure 13-36 New Scan configuration - Profiles tab


              When to SCAN tab
              As with the Probe and Ping jobs, the scheduling of the job is specified on the When to Scan
              tab.

              Alert tab
              You can be alerted through mail, script, Windows Event Log, SNMP trap, TEC event, or Login
              notification if the Scan job fails. The Scan job may fail if an Agent is unreachable.

              Click the floppy icon to save your new Scan job, shown in Figure 13-37.




              Figure 13-37 New Scan - Save




Putting it all together
                    Table 13-2 summarizes the report views for filesystems and directories that will be available
                   depending on the settings of the Profiles and the Scan jobs. We assume the Profiles have
                   been defined with the Summarize space by Filesystem/Directory option. Note that in order
                   to get reports by filesystem or directory, you need to select either or both in the Scan Profile.

Table 13-2 Profiles/Scans versus Reports
 Scan Jobs settings                                                                 Available reports

 Filesystem    Directory       Filesystem    Directory    What is scanned           By Filesystem       By Directory
 /Computer                     profile       profile                                Reports             Reports

 x             -               -             -            FS                        -                   -

 x             x               -             -            FS;                       -                   -
                                                          Dir if in specified FS

 x             x               x             -            FS;                       x                   -
                                                          Dir if in specified FS

 x             x               x             x            FS;                       x                   x
                                                          Dir if in specified FS

 x             x               -             x            FS;                       -                   x
                                                          Dir scanned if in
                                                          specified FS

 x             -               x             x            FS                        x                   -

 x             -               -             x            FS                        -                   -
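The pattern in Table 13-2 can be restated compactly as an illustrative sketch (`available_reports` is a hypothetical name, not a product API; it assumes the filesystem itself is always selected in the Scan):

```python
# Compact restatement of Table 13-2 (illustrative sketch, not product code).
def available_reports(directory_selected, fs_profile, dir_profile):
    """Given a Scan where the filesystem is selected, return which report
    views (by_filesystem, by_directory) will have data."""
    # By Filesystem reports need a filesystem-level profile on the Scan.
    by_filesystem = fs_profile
    # By Directory reports need the directory selected AND a directory profile.
    by_directory = directory_selected and dir_profile
    return by_filesystem, by_directory

# Directory selected with both profiles -> both report views have data.
print(available_reports(True, True, True))    # (True, True)
# No directory selected -> directory profile alone produces no directory reports.
print(available_reports(False, True, True))   # (True, False)
```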




13.4 OS Alerts
                    TotalStorage Productivity Center for Data enables you to define Alerts on computers,
                    filesystems, and directories. Once the Alerts are defined, Data Manager monitors the results
                    of the Probe and Scan jobs and triggers an Alert when a threshold or condition is met.

                    TotalStorage Productivity Center for Data provides a number of Alert mechanisms from
                    which you can choose, depending on the severity you assign to the Alert.

                   Depending on the severity of the triggered event or the functions available in your
                   environment, you may want to be alerted with:




An SNMP trap to an event manager. Figure 13-38 shows a Filesystem space low Alert as
                  displayed in our SNMP application, IBM Tivoli NetView. Defining the event manager is
                  explained in 8.5, “Alert Disposition” on page 316.




              Figure 13-38 Alert - SNMP trap sample

                  A TEC (Tivoli Enterprise Console) event.
                  An entry in the Alert Log (see Figure 13-39). You can configure Data Manager, so that the
                  Alert Log will be automatically displayed when you log on to the GUI by using Preferences
                  → Edit General (see Figure 13-40).




              Figure 13-39 Alert - Logged alerts sample




Figure 13-40 Alert - Preferences

   An entry in the Windows Event log, as shown in Figure 13-41. This is useful for lower
   severity alerts or when you are monitoring your Windows event logs with an automated
   tool such as IBM Tivoli Distributed Monitoring.




Figure 13-41 Alerts - Windows Event Viewer sample

   Running a specified script - The script runs on the specified computer with the authority of
   the Agent (root or Administrator). See 13.5.5, “Scheduled Actions” on page 582 for special
   considerations with scripts execution.
   An e-mail - TotalStorage Productivity Center for Data must be configured with a valid
   SMTP server and port as explained in 8.5, “Alert Disposition” on page 316.




13.4.1 Alerting navigation tree
              Figure 13-42 shows the complete navigation tree for OS Alerting which includes Computer
              Alerts, Filesystem Alerts, Directory Alerts, and Alert Log.




              Figure 13-42 OS Alerting tree




Except for the Alert Log, you can create multiple definitions for each of those Alert features of
TotalStorage Productivity Center for Data. To create a new definition, right-click the feature
and select Create <feature>. Figure 13-43 shows how to create a new Filesystem Alert.




Figure 13-43 Filesystem alert creation




13.4.2 Computer Alerts
              Computer Alerts act on the output of Probe jobs (see 13.3.5, “Probes” on page 545) and
              generate an Alert for each computer that meets the triggering condition. Figure 13-44 shows
              the configuration screen for a Computer Alert.




              Figure 13-44 Computer alerts - Alerts


              Alerts tab
              The Alerts tab contains two parts:
                  Triggering condition to specify the computer component you want to be monitored. You
                  can monitor a computer for:
                  –   RAM increased
                  –   RAM decreased
                  –   Virtual Memory increased
                  –   Virtual Memory decreased
                  –   New disk detected
                  –   Disk not found
                  –   New disk defect found
                  –   Total disk defects exceed. You will have to specify a threshold.
                  –   Disk failure predicted
                  –   New filesystem detected
                  Information about disk failures is gathered through commands issued against the disks,
                  with the following exceptions:
                  – IDE disks support only Disk failure predicted queries
                  – AIX SCSI disks do not support failure or predicted-failure queries
                  Triggered action, where you specify the action that must be executed. If you choose to run
                  a script, it will receive several positional parameters that depend on the triggering
                  condition. The parameters display on the Specify Script panel, which is accessed by
                  checking Run Script and clicking the Define button.


Figure 13-45 shows the parameters passed to the script for a RAM decreased condition.




Figure 13-45 Computer alerts - RAM decreased script parameters

Figure 13-46 shows the parameters passed to the script for a Disk not found condition.




Figure 13-46 Computer alerts - Disk not found script parameters


Computers tab
This limits the Alert process to specific computers or computer Groups (Figure 13-47).




Figure 13-47 Computer alerts - Computers tab


13.4.3 Filesystem Alerts
              Filesystem Alerts will act on the output of Probe and Scan jobs and generate an Alert for each
              filesystem that meets the specified threshold. Figure 13-48 shows the configuration screen for
              a Filesystem Alert.




              Figure 13-48 Filesystem Alerts - Alert


              Alerts tab
              As for Computer Alerts, the Alerts tab contains two parts. In the Triggering condition section
              you can specify to be alerted if a:
                  Filesystem is not found, which means the filesystem was not mounted during the most
                  recent Probe or Scan.
                  Filesystem is reconfigured.
                  Filesystem free space is less than a threshold specified in percent, KB, MB, or GB.
                  Free UNIX filesystem inode count is less than a threshold (either percent or inodes count).
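A minimal sketch of how such a threshold condition evaluates (an assumed model for illustration, not product code; `freespace_alert` is a hypothetical name):

```python
# Minimal sketch of a filesystem free-space triggering condition
# (assumed model, not product code).
def freespace_alert(total_kb, used_kb, threshold, unit="percent"):
    """Trigger when free space drops below the threshold.
    unit: 'percent' or 'kb' (the GUI also offers MB and GB)."""
    free_kb = total_kb - used_kb
    if unit == "percent":
        return 100.0 * free_kb / total_kb < threshold
    return free_kb < threshold

# 5% free is below a 10% threshold -> Alert triggers.
print(freespace_alert(1_000_000, 950_000, 10))             # True
# 200,000 KB free is above a 100,000 KB threshold -> no Alert.
print(freespace_alert(1_000_000, 800_000, 100_000, "kb"))  # False
```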




You can choose to run a script (click the Define button next to Run Script), or you can also
           change the content of the default generated mail by clicking Edit Email. You will see a popup
           with the default mail skeleton which is editable. Figure 13-49 shows the default e-mail
           message.




           Figure 13-49 Filesystem alert - Freespace default mail


13.4.4 Directory Alerts
           Directory Alerts will act on the output of Scan jobs.

           Alerts tab
           Directory Alerts configuration is similar to Filesystem alerts. The supported triggers are:
              Directory not found
              Directory consumes more than the specified threshold set in percent, KB, MB or GB.

           Directories tab
           Probe jobs do not report on directories, and Scan jobs report on directories only if a
           directory Profile has been assigned (see “Putting it all together” on page 555). Therefore,
           you can only choose to be alerted for directories that have already been included in a Scan
           and actually scanned.




13.4.5 Alert logs
              The Data Manager → Alerting → Alert log menu (Figure 13-50) lists all Alerts that have
              been generated.




              Figure 13-50 Alerts log

              There are nine different views. Each of them will show only the Alerts related to the selected
              view except:
                  All view - Shows all Alerts
                  Alerts Directed to <logged user> - Shows all Alerts where the current logged user has
                  been specified in the Login notification field

              When you click the icon on the left of a listed Alert, you will see detailed information on the
              selected Alert as shown in Figure 13-51.




Figure 13-51 Detailed Alert information



13.5 Policy management
          The Policy Management functions of Data Manager enable you to:
             Define space limits (Quotas) on storage resources used by user IDs and user groups.
             These limits can be set at a network (whole environment), computer, and filesystem level.
             Define space limits (Quotas) on NAS resources used by user IDs and user groups.
             Perform checks (Constraints) on specific files owned by users, and run actions against
             those files.
             Define a filesystem extension policy to automatically increase filesystem capacity for
             managed hosts when utilization reaches a specified level. The LUN provisioning option
             can be enabled to extend filesystems within an ESS.
             Schedule scripts against your storage resources.


13.5.1 Quotas
          Quotas can be set at either a user level or an OS User Group level. For the OS User Group
          level, this could be either an OS User Group (see “OS User Group Groups” on page 540) or
          a standard OS group (such as system on UNIX or Administrators on Windows). User
          Quotas trigger an action when one of the monitored users has reached the limit, while OS
          User Group Quotas trigger the action when the sum of space used by all users of the
          monitored groups has reached the limit. The Quota definition mechanism is the same for
          both, except for the following differences:
             The menu tree to use:
             – Data Manager → Policy Management → Quotas → User
             – Data Manager → Policy Management → Quotas → OS User group


The monitored elements you can specify:
                  – User and user groups for User Quotas
                  – OS User Group and OS User Group Groups for OS User Group Quota

              We will show how to configure User Quotas. User Group Quotas are configured similarly.

              Note that Quota enforcement is soft - that is, users are not automatically prevented from
              exceeding their defined Quota, but the defined actions will trigger if that happens. There are
              three sub-entries for Quotas: Network Quotas, Computer Quotas, and Filesystem Quotas.

              Network Quotas
              A Network Quota defines the maximum cumulative space a user can occupy across all the
              scanned servers. An Alert will be triggered for each user that exceeds the limit specified in the
              Quota definition.

              Use Data Manager → Policy Management → Quotas → User → Network, right-click
              and select Create Quota to create a new Quota. The right pane displays the Quota
              configuration screen with four tabs.

              Users tab
              Figure 13-52 shows the Users tab for Network Quotas.




              Figure 13-52 User Network Quotas - Users tab

              From the Available column, select any user ID or OS User Group you want to monitor for
              space usage.

              The Profile pull-down menu is used to specify the file types that will be subject to the Quota.
              The list will display all Profiles that create summaries by user (by file owner). Select the
              Profile you want to use from the pull-down. The default Profile Summary by Owner collects
              information about all files and summarizes them on the user level. The ALLGIFFILES profile
              collects information about GIF files and creates a summary at a user level as displayed in
              Figure 13-53. This (non-default) profile was created using the process shown in 13.3.6,
              “Profiles” on page 547.



Figure 13-53 Profile with user summary

Using this profile option, we can define general Quotas for all files and more restrictive
Quotas for some multimedia files such as GIF and MP3.

Filesystem tab
On the Filesystem tab shown Figure 13-54, select the filesystems or computers you want to
be included in the space usage for Quota management.




Figure 13-54 User Network Quotas - Filesystem tab

In this configuration, each user's cumulative space usage across all the selected servers is
calculated and checked against the Quota limit.

When to Check tab
              The Quota management is based on the output of the Scan jobs. Therefore, each Quota
              definition must be scheduled to run after the Scan jobs that collect the required information.

              The When to Check tab is standard, and allows you to define a one-off or a recurring job.

              Alert tab
              On the Alert tab, specify the Quota limit in KB, MB, or GB, and the action to run when the
              Quota is exceeded.




              Figure 13-55 User Network Quotas - Alert tab

              You can choose from the standard Alerts type available with TotalStorage Productivity Center
              for Data. Each Alert will be fired once for each user exceeding their Quota. We have selected
              to run a script that we wrote, QUOTAUSERNET.BAT, listed in Example 13-2.

              Example 13-2 QUOTAUSERNET.BAT script
              echo NETWORK quota exceeded - %1 %2 uses %3 - Limit set to %4 >>quotausernet.txt


              Example 13-3 shows the output file created by QUOTAUSERNET.BAT.

              Example 13-3 Content of quotausernet.txt
              NETWORK quota exceeded - user root uses 11.16GB - Limit set to 5.0GB
              NETWORK quota exceeded - user Administrators@BUILTIN uses 11.97GB - Limit set to 5GB


              The Alert has fired for users root and Administrators. This clearly shows that administrative
              users such as root and Administrators should not normally be included in standard Quota
              monitoring.
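On UNIX Agents, a counterpart to the batch script above might look like the following sketch. The positional parameter layout ($1 through $4) is an assumption mirroring the batch example; verify it against your Alert definition before use.

```shell
#!/bin/sh
# Sketch of a UNIX counterpart to QUOTAUSERNET.BAT. The parameter
# positions are assumed (not confirmed here) to match the batch example:
# $1 $2 = "user <name>", $3 = space used, $4 = quota limit.
log_quota_violation() {
    echo "NETWORK quota exceeded - $1 $2 uses $3 - Limit set to $4" >> quotausernet.txt
}
```

Remember that a script run on a UNIX Agent must name its shell on the first line, as described in 13.5.5, “Scheduled Actions”.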




Computer Quotas
Computer Quotas enable you to fire Alerts when a user exceeds their space Quota on a
specific computer as shown in Figure 13-56. Multiple Alerts are generated if a user violates
the Quota on separate computers.




Figure 13-56 Computer Quota - Alerts log


Filesystem Quotas
A Filesystem Quota defines a space usage limit at the filesystem level. An Alert will be fired
for each filesystem where a user exceeds the limit specified in the Quota definition.

Use Data Manager → Policy Management → Quota → User → Filesystem, right-click,
and select Create Quota to create a new Quota. After setting up and running a Quota for
selected filesystems, we received the following entries in the Alert History, shown in
Figure 13-57.




Figure 13-57 Filesystem Quota - Alerts log



13.5.2 Network Appliance Quotas
              Using Data Manager → Policy Management → Network Appliance Quotas →
              Schedules, you can compare the space used by users against Quotas defined inside
              Network Appliance filers, using the appropriate software, and raise an Alert whenever a user
              is close to reaching the NetApp Quota.

              When you run a Network Appliance Quota job, the NetApp Quota definitions will be imported
              into TotalStorage Productivity Center for Data for read-only purposes.

                Note: Network Appliance Quotas jobs must be scheduled after the Scan jobs, since they
                use the statistics gathered by the latest Scan to trigger any NetApp Quota violation.

              With Data Manager → Policy Management → Network Appliance Quotas → Imported
              User Quotas and Imported OS User Group Quotas, you can view the definitions of the
              Quotas defined on your NetApp filers.


13.5.3 Constraints
              The main features of Constraints are listed in Figure 13-58.




                    Constraints
                    Reports and triggers actions based on specific files which use
                    too much space on monitored servers

                    Files can be selected based on
                         server and filesystem
                         name pattern (eg: *.mp3, *.avi)
                         owner
                         age
                         size
                         attributes

                    Actions triggered through standard Alerting mechanism when
                    total space used by files exceeds a threshold




              Figure 13-58 Constraints

              Constraints are used to generate Alerts when files matching specified criteria are consuming
              too much space on the monitored servers.




Constraints provide a deeper level of data management. Quotas report on users who have
exceeded their space limitations; Constraints go further, letting you set limits on particular
file types or on other attributes such as owner and age. When a Constraint is applied during
a Scan, its output is a list of the files that are consuming too much space.

 Note: Unlike Quotas, Constraints are automatically checked during Scan jobs and do not
 need to be scheduled. Also, the Scan does not need to be associated with Profiles that will
 cause data to be stored for reporting.


Filesystems tab
The Filesystems tab helps you select the computers and filesystems you want to check for
the current Constraint. The selection method for computers and filesystems is the same as
for Scan jobs (see 13.3.7, “Scans” on page 552).

File Types tab
On the File Types tab, you can explicitly allow or disallow certain file patterns (Figure 13-59).




Figure 13-59 Constraint - File Types

Use the buttons at the top of the screen to allow or forbid files depending on their names. The
left column shows some default file patterns, or you can use the bottom field to create your
own pattern. Click >> to add your pattern to the allowed/forbidden files.




Users tab
              The Users tab (shown in Figure 13-60) is used to allow or restrict the selected users in the
              Constraint.




              Figure 13-60 Constraint - Users


                Important: The file condition is logically ORed with the User condition. A file will be
                selected for Constraint processing if it meets at least one of the conditions.


              Options tab
              The Options tab provides additional conditions for file selection, and limits the number of
              selected files to store in the central repository.

              Once again, the conditions added in this tab will be logically ORed with those previously set
              in the File Types and Users tabs.




The bottom part of the tab, shown in Figure 13-61, contains the textual form of the Condition,
taking into account all the entries made in the Filesystems, File Types, Users and Options
tabs.




Figure 13-61 Constraints - Options

You can change this condition or add additional conditions, by using the Edit Filter button. It
displays the file filter popup (Figure 13-62) to change, add, and remove conditions or
conditions groups as previously explained in 13.3.6, “Profiles” on page 547.




Figure 13-62 Constraints - File filter




We changed the file filter to a more appropriate one by changing the OR operator to AND
              (Figure 13-63).




              Figure 13-63 Constraints - File filter changed


              Alert tab
              After selecting the files, you may want to generate an Alert only if the total used space
              meeting the Constraint conditions exceeds a predefined limit. Use the Alert tab to specify
              the triggering condition and action (Figure 13-64).




              Figure 13-64 Constraints - Alert




In our Constraint definition, a script is triggered for each filesystem where the selected files
exceed one Gigabyte. We select the script by checking the Run Script option and selecting
Change... as shown in Figure 13-65. The script will be passed several parameters including a
path to a file that contains the list of files meeting the Constraint. You can use this list to
execute any action including delete or archive commands.




Figure 13-65 Constraints - Script parameters

Our example uses a sample script (tsm_arch_del.vbs) which is shipped with TotalStorage
Productivity Center for Data, which archives all the files in the produced list to a Tivoli Storage
Manager server, and then deletes them from local storage. This script is installed with
TotalStorage Productivity Center for Data server, and stored in the scripts subdirectory of the
server installation. It can be edited or customized if required - we recommend that you save
the original files first.

Versions for Windows (tsm_arch_del.vbs) and UNIX (tsm_arch_del) are provided. If you will
run this Constraint on a UNIX agent, Perl must be installed on the agent. A Tivoli Storage
Manager server must be available and configured for this script to work. For more
information on the sample scripts, see Appendix A of the IBM Tivoli Storage Resource
Manager User’s Guide, SC32-9069.
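As a rough illustration of how a Constraint-triggered script can process the file list (this is a sketch, not the shipped tsm_arch_del source), the following assumes the path to the list file arrives as the first positional parameter; verify the actual parameter order in your Alert definition:

```shell
#!/bin/sh
# Hypothetical handler for a Constraint Alert. We assume $1 is the path to
# a file listing, one per line, the files that met the Constraint.
process_constraint_list() {
    listfile="$1"
    while IFS= read -r f; do
        # Replace the echo with a real action, for example archiving to
        # Tivoli Storage Manager and deleting the local copy:
        #   dsmc archive "$f" -deletefiles
        echo "would archive and delete: $f"
    done < "$listfile"
}
```

Because the action is driven entirely by the list file, the same skeleton works whether you archive, delete, or simply report on the offending files.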




13.5.4 Filesystem extension and LUN provisioning
              The main functions of Filesystem Extension are shown in Figure 13-66.




                 Filesystem Extension


                   Automates filesystem extension

                   Supported platforms
                        AIX using JFS
                        SUN using VxFS

                   Support for automatic LUN provisioning with IBM ESS Storage
                   Subsystem

                   Actions triggered through standard Alerting mechanism when
                   a filesystem extension is performed






              Figure 13-66 Filesystem Extension




We use filesystem extension policy to automatically extend filesystems when utilization
reaches a specified threshold. We can also enable LUN provisioning to extend filesystems
within an ESS.

To set up filesystem extension policy, select Data Manager → Policy Management →
Filesystem Extension. Right-click Filesystem Extension and select Create Filesystem
Extension Rules as seen in Figure 13-67.




Figure 13-67 Create Filesystem Extension Rules




In the Filesystems tab, select the filesystems which will use filesystem extension policy by
              moving them to the Current Selections panel.

              Note the Enabled checkbox - the default is to check it, meaning the rule will be active. If you
              uncheck the box, it will toggle to Disabled - you can still save the rule, but the job will not run.

              To specify the extension parameters, select the Extension tab (Figure 13-68).




              Figure 13-68 Filesystem Extension - Extension

              This tab specifies how a filesystem will be extended. An explanation of the fields is provided
              below.

              Amount to Extend
              We have the following options:
                  Add - the amount of space used for extension in MB or GB, or as a percentage of
                  filesystem capacity.
                  Make Freespace - the amount of freespace that will be maintained in the filesystems by
                  this policy. If freespace falls below the amount that is specified, the difference will be
                  added. Freespace can be specified in MB or GB increments, or by a percentage of
                  filesystem capacity.
                  Make Capacity - the total capacity that will be maintained in the selected filesystems. If
                  the capacity falls below the amount specified, the difference will be added.




Limit Maximum Filesystem Capacity?
When this option is enabled, the Filesystem Maximum Capacity is used in conjunction with
the Add or Make Freespace under Amount to Extend. If you enter a maximum capacity for a
filesystem in the Filesystem Maximum Capacity field and the filesystem reaches the
specified size, the filesystem will be removed from the policy and an Alert will be triggered.

Condition for Filesystem Extension
The options are:
   Extend filesystems regardless of remaining freespace - the filesystem will be
   expanded regardless of the available free space.
   Extend filesystems when freespace is less than - defines the threshold for the
   freespace which will be used to trigger the filesystem expansion. If freespace falls below
   this value, the policy will be executed. Freespace can be specified in MB or GB
   increments, or by a percentage of filesystem capacity.

 Note: If you select Make Capacity under Amount to Extend, the Extend filesystems
 when freespace is less than option is not available.


Use LOG ONLY Mode
Enable Do Not Extend Filesystems - Log Only when you want the policy to log the
filesystem extension. The extension actions that would have taken place are written to the log
file, but no extension takes place.

In the Provisioning tab (Figure 13-69) we define LUN provision parameters. Note that LUN
provisioning is available at the time of writing for filesystems on an ESS only.




Figure 13-69 Filesystem Extension - Provisioning




LUN Provisioning is an optional feature for filesystem extension. When the Enable
              Automatic LUN Provisioning is selected, LUN provisioning is enabled.

              In the Create LUNs that are at least field, you can specify a minimum size for new LUNs. If
              you select this option, LUNs of at least the size specified will be created. If no size is
              specified, then the Amount to Extend option specified for the filesystem (in “Amount to
              Extend” on page 578) will be used. For more information on LUN provisioning, see IBM Tivoli
              Storage Resource Manager 1.2 User’s Guide.

              The Model for New LUNs feature means that new LUNs will be created similar to existing
              LUNs in your setup. At least one ESS LUN must be currently assigned to the TotalStorage
              Productivity Center for Data Agent associated with the filesystem you want to extend. There
              are two options for LUN modeling:
                   Model new LUNs on others in the volume group of the filesystem being extended -
                   provisioned LUNs are modeled on existing LUNs in the extended filesystem’s volume
                   group.
                   Model new LUNs on others on the same host as the filesystem being extended -
                   provisioned LUNs are modeled first on existing LUNs in the extended filesystem’s
                   volume group; if no LUN there can satisfy the requirements, other LUNs on the same
                   host are used as the model.

              The LUN Source option defines the location of the new LUN in the ESS, and has two options:
                  Same Storage Pool - provisioned LUNs will be created using space in an existing
                   Storage Pool. In ESS terminology this is called the Logical Subsystem, or LSS.
                  Same Storage Subsystem - provisioned LUNs can be created in any Storage Pool or
                  ESS LSS.

              The When to Enforce Policy tab (Figure 13-70) specifies when to apply the filesystem
              extension policy to the selected filesystems.




              Figure 13-70 When to Enforce Policy tab




The options are:

Enforce Policy after every Probe or Scan automatically enforces the policy after every
Probe or Scan job. The policy will stay in effect until you either change this setting or disable
the policy.

Enforce Policy Now enforces the policy immediately for a single instance.

Enforce Policy Once at enforces the policy once at the specified time (month, day, year,
hour, minute, and AM/PM).

The Alert tab (Figure 13-71) can define an Alert that will be triggered by the filesystem
extension job.




Figure 13-71 Alert tab




Currently the only available condition is A filesystem extension action started
              automatically.

              Refer to “Alert tab” on page 544 for an explanation of the definitions.

                Important: After making configuration changes to any of the above filesystem extension
                options, you must save the policy, as shown in Figure 13-72. If you selected Enforce
                Policy Now, the policy will be executed after saving.




              Figure 13-72 Save filesystem changes

              For more information on filesystem extension and LUN provisioning, see IBM Tivoli Storage
              Resource Manager: A Practical Introduction.


13.5.5 Scheduled Actions
              TotalStorage Productivity Center for Data comes with an integrated tool to schedule script
              execution on any of the Agents. If a script fails due to an unreachable Agent, the standard
              Alert processes can be used. To create a Scheduled action, select Data Manager → Policy
              Management → Scheduled Actions → Scripts, right-click and select Create Script.

              Computers tab
              On the Computers tab, select the computers or computer groups to execute the script.

              Script Options tab
        From the pull-down field, select a script that exists on the server. You can also enter the
        name of a script that does not yet exist on the server, or that resides only on the Agents.




The Script options tab is shown in Figure 13-73.




        Figure 13-73 Scheduled action - Script options

        The Script Name pull-down field lists all files (including non-script files) in the server’s
        scripts directory.

         Attention: For Windows Agents, the script must have an extension that has an associated
         script engine on the computer running the script (for example: .BAT, .CMD, or .VBS).

         For UNIX Agents:
            The extension is removed from the specified script name.
            The path to the shell (for example, /bin/bsh, /bin/ksh) must be specified in the first line of
            the script.
            If the script is located in the scripts directory of a Windows TotalStorage Productivity
            Center for Data Server, it must either have been created on a UNIX platform and
            transferred in binary mode to the Server, or be converted with a UNIX tool such as
            dos2unix. This ensures that the line endings (LF rather than CR/LF) are correct for
            execution under UNIX.
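A minimal sketch of a script that honors these conventions follows. Since the extension is stripped, the shebang on the first line is what selects the interpreter; the timestamped message is just an illustrative action.

```shell
#!/bin/sh
# Minimal scheduled-action script for a UNIX Agent. The first line must
# name the shell, and the file must use UNIX (LF) line endings.
# Record a UTC timestamp so runs can be correlated with the Alert log.
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "scheduled action ran at $now"
```

If the file was edited on Windows, run dos2unix on it before placing it in the server's scripts directory.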


        When to Run tab
        As for other Data Manager jobs, you can choose to run a script once or repeatedly at a
        predefined interval.

        Alerts tab
        With the Alert tab you can choose to be notified when a script fails due to an unreachable
        Agent or a script not found condition. The standard Alert Mechanism described in 13.4, “OS
        Alerts” on page 555 is used.


13.6 Database monitoring
        The Monitoring functions of Data Manager are extended to databases when the license key
        is enabled (8.4, “Configuring Data Manager for Databases” on page 313). Currently, MS
        SQL-Server, Oracle, DB2, and Sybase are supported.


We will now review the Groups, Probes, Scans, and Profiles definitions for Data Manager for
              Databases, and show the main differences compared to the core Data Manager monitoring
              functions.

              Figure 13-74 shows the navigation tree for Data Manager for Databases.




              Figure 13-74 Databases - Navigation Tree


13.6.1 Groups
              To get targeted monitoring of your database assets, you can create Groups consisting of:
                  Computers
                  Databases-Tablespaces
                  Tables
                  Users

              Computer Groups
              All databases residing on the selected computers will be probed, scanned, and managed for
              Quotas.

              The groups you have created using TotalStorage Productivity Center for Data remain
              available for TotalStorage Productivity Center for Data for Databases. If you create a new
              Group, the computers you put in it will be removed from the Group they currently belong to.

              To create a Computer Group, use Data Manager - Databases → Monitoring → Groups
               → Computer, right-click, and select Create Computer Group.

              “Computer Groups” on page 536 gives more information on creating Computer Groups.

              Databases-Tablespaces Groups
              Creating Groups with specific databases and tablespaces may be useful for applying identical
              management rules for databases with the same functional role within your enterprise.

An example could be to create a group with all the Oracle-Server system databases, as you
          will probably apply the same rules for space and alerting on those databases. This is shown
          in Figure 13-75.




          Figure 13-75 Database group definition


          Table Groups
          You can use Table Groups to create Groups of the same set of tables for selected or all
          database instances.

          You can use two different views to create a table group:
             Tables by instance selects several tables for one instance.
             Instances by table selects several instances for one table.

          You can combine both views as each entry you add will be added to the group.

          User Groups
          As for core TotalStorage Productivity Center for Data, you can put user IDs in groups. The
          user groups you create will be available for the whole TotalStorage Productivity Center for
          Data product set.

           Tip: The Oracle and MS SQL-Server user IDs (SYSTEM, sa, ...) are also included in the
           available users list after the first database Probe.


13.6.2 Probes
          The Probe process is used to gather data about the files, instances, logs, and objects that
          make up monitored databases. The results of Probe jobs are stored in the repository and are
          used to supply the data necessary for Asset Reporting.

          Use Data Manager - Databases → Monitoring → Probe, right-click, and select Create
          Probe to define a new Probe job. In the Instance tab of the Probe configuration, you can
          select specific instances, computers, and computer groups (Figure 13-76).




Figure 13-76 Database Probe definition

              The Computers list contains only computers that have been defined for Data Manager for
              Databases. The definition procedure is described in “Configuring Data Manager for
              Databases” on page 313.


13.6.3 Profiles
              As for TotalStorage Productivity Center for Data, Profiles in Data Manager for Databases are
              used to determine the databases attributes that are to be scanned. They also determine the
              summary level and retention time to keep in the repository.

              Use Data Manager - Databases → Monitoring → Profiles, right-click, and select Create
              Profile to define a new profile. Figure 13-77 shows the Profile definition screen.




              Figure 13-77 Database profile definition


You can choose to gather data on table sizes, database extents, or database free space, and
           summarize the results at the database or user level.


13.6.4 Scans
          Scan jobs in Data Manager for Databases collect statistics about the storage usage and
          trends within your databases. The gathered data is used as input to the usage reporting and
          Quota analysis.

           Defining a Scan job requires defining:
                The databases, computers, and instances to Scan
                The tables to monitor for detailed information, such as size, used space, indexes, and
                row count
                The Profile that will determine the data that is gathered and the report views that will be
                made available by the Scan
                The job scheduling frequency
                Oracle-only additional options to gather information about pages allocated to a segment
                that has enough free space for additional rows
                The alerting mechanism to use should the Scan fail

          All this information is set through the Scan definition screen that contains one tab for each
          previously listed item. To define a new Scan, select Data Manager - Databases →
          Monitoring → Scans, right-click and select Create Scan as in Figure 13-78.




          Figure 13-78 Database Scan definition


           Note: If you request detailed scanning of tables, the tables will only be scanned if their
           respective databases have also been selected for scanning.



13.7 Database Alerts
              TotalStorage Productivity Center for Data for Databases enables you to define Alerts on
              instances, databases, and tables. The Probe and Scan jobs output are processed and
              compared to the defined alerts. If a threshold is reached, an Alert will be triggered.

              TotalStorage Productivity Center for Data for Databases uses the standard Alert
              mechanisms described in 13.4, “OS Alerts” on page 555.


13.7.1 Instance Alerts
              Use Data Manager - Databases → Alerting → Instance Alerts, right-click, and select
              Create Alert to define the Alerts shown in Table 13-3. These Alerts are triggered during
              the Probe process.

              Table 13-3 Instance Alerts
                Alert type                               Oracle         Sybase             MSSQL

                New database discovered                                 x                  x

                New tablespace discovered                x

                Archive log contains more than X units   x

                New device discovered                                   x

                Device dropped                                          x

                Device free space greater than X units                  x

                Device free space less than X units                     x

              An interesting Alert is the Archive Log Directory Contains More Than for Oracle, since the
              Oracle application can hang if there is no more space available for its archive log. This Alert
              can be used to monitor the space used in this specific directory and trigger a script that will
              archive the files to an external manager such as Tivoli Storage Manager once the predefined
              threshold is reached. For a detailed example, refer to IBM Tivoli Storage Resource Manager:
              A Practical Introduction, SG24-6886.
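As an illustration of the kind of user-supplied script such an Alert can trigger, the sketch below simply relocates archive log files to a staging directory for an external archiver to pick up. This is an assumption-laden sketch: the file suffix and directory layout are invented, and a real deployment would typically invoke the Tivoli Storage Manager client against the files rather than move them.

```python
import os
import shutil

def relocate_archive_logs(log_dir, staging_dir, suffix=".arc"):
    """Move Oracle archive log files out of log_dir into staging_dir,
    where an external archiver (for example, Tivoli Storage Manager)
    can pick them up. Returns the list of file names moved."""
    os.makedirs(staging_dir, exist_ok=True)
    moved = []
    for name in sorted(os.listdir(log_dir)):
        if name.endswith(suffix):
            shutil.move(os.path.join(log_dir, name),
                        os.path.join(staging_dir, name))
            moved.append(name)
    return moved
```

Wired in as the Alert's triggered script, a check like this frees space in the archive log directory before Oracle stalls for lack of it.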


13.7.2 Database-Tablespace Alerts
              To define a Database-Tablespace Alert, select Data Manager - Databases → Alerting →
              Database-Tablespace Alerts, right-click, and select Create Alert. You can define various
              monitoring options on your databases as shown in Table 13-4. Those Alerts are triggered
              during the Probe process.

               Table 13-4 Database-Tablespace Alerts
                Alert type                                         Oracle        Sybase        MSSQL

                Database/Tablespace freespace lower than           x             x             x

                Database/Tablespace offline                        x             x             x

                Database/Tablespace dropped                        x             x             x

                Freespace fragmented in more than n extents        x

                Largest free extent lower than                     x




Alert type                                            Oracle         Sybase           MSSQL

            Database Log freespace lower than                                    x                x

            Last dump time previous to n days                                    x




13.7.3 Table Alerts
           To define a new Table Alert, select Data Manager - Databases → Alerting → Table Alerts,
           right-click, and select Create Alert. With this option you can set up monitoring on database
           tables. The Alerts that can be triggered for a table are shown in Table 13-5. Those Alerts are
           triggered during the Scan processes, and only if the Scan includes a Table Group.

           Table 13-5 Table alerts
            Alert type                                     Oracle           Sybase            MSSQL

            Total Table Size Greater Than                  x                 x                x

            Table Dropped                                  x                 x                x

            (Max Extents - Allocated) <                    x
            Segment Has More Than                          x

            Chained Row Count Greater Than                 x

            Empty Used Segment Space Exceeds               x

            Forwarded Row Count Greater Than                                x


13.7.4 Alert log
           The Data Manager - Databases → Alerting → Alert Log menu lists all Alerts that have
           been fired: those from Probe jobs, Scan jobs, user-defined Alerts, and Quota violations.

            Tip: Please refer to 13.4.5, “Alert logs” on page 564 for more information about using the
            Alert log tree.



13.8 Databases policy management
           The Policy Management functions of Data Manager for Databases enable you to:
              Define space limits (Quotas) on the database space used by table owners. These limits
              can be set at the network (whole environment), instance, or database level.
              Schedule scripts against your database resources.




13.8.1 Network Quotas
              A Network Quota defines the maximum cumulative space a user can occupy across all the
              scanned databases. An Alert is fired for each user who exceeds the limit specified in the
              Quota definition.

              To create a new Quota, select Data Manager - Databases → Policy Management →
              Quotas → Network, right-click, and select Create Quota. The right pane switches to a
              Quota configuration screen with four tabs.

              Users tab
              On the Users tab, specify the database users you want to monitor for Quotas. You can
              also select a profile in the Profile pull-down field at the top right of the tab. In this field, you
              can select any Profile that stores summary data at a user level. The Quota will only be
              checked for databases that have been scanned using this Profile (Figure 13-79).




              Figure 13-79 Database Quota - Users tab

              Database-Tablespace tab
              Use this tab to restrict Quota checking to certain databases. You can choose several
              databases or computers. If you choose a computer, all the databases running on it will be
              included for Quota management.

              When to run tab
              As with Data Manager, you can select the time to run from:
                  Immediate
                   Once at a scheduled date and time
                  Repetitive at predefined intervals

              Alert tab
              On the Alert tab you can specify the space limit allowed for each user and the action to run. If
              no action is selected, the Quota violation will only be logged in the Alert log.
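Conceptually, the check configured on this tab compares each monitored user's cumulative space against the limit and flags anyone over it. A minimal sketch of that comparison (the user names and megabyte figures are invented; in the product, the usage data comes from Scans):

```python
def quota_violations(usage_by_user, limit_mb):
    """Given each user's total space consumption in MB, return the
    users who exceed the Quota limit, with the amount by which each
    user is over it. An empty result means no Alert would fire."""
    return {user: used - limit_mb
            for user, used in usage_by_user.items()
            if used > limit_mb}
```

Each entry in the returned mapping corresponds to one Quota violation entry in the Alert log.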




13.8.2 Instance Quota
           The Instance Quota mechanism is similar to the Network Quota, except that it is set at the
           instance level. Whenever a user reaches the Quota on one instance, an Alert will be fired.


13.8.3 Database Quota
           With a Database Quota, the Quota is set at the database level. Each monitored user is
           reported as soon as they reach the limit on at least one of the monitored databases.



13.9 Database administration samples
           We now list some typical checks done regularly by Oracle database administrators and show
           how they can be automated using Data Manager for Databases.


13.9.1 Database up
           Data Manager for Databases can be used to test for database availability using Probe and
           Scan jobs since they will fail and trigger an Alert if either the database or the listener is not
           available. Since those jobs use system resources to execute, you may instead choose
           scheduled scripts to test for database availability.
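One such user-written script could be as simple as testing whether the database listener accepts TCP connections. The sketch below makes that assumption explicit: it only proves that the listener port answers, not that the database instance itself is open, and the host and port are site-specific values.

```python
import socket

def listener_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the database listener
    succeeds within the timeout. A successful connect shows the
    listener port is accepting connections; it does not verify
    that the database instance behind it is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A scheduler could run this periodically against, say, the Oracle listener's configured port and raise its own notification on failure.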

           Due to limited scheduling options and the need for user-written scripts, we recommend using
           dedicated monitoring products such as Tivoli Monitoring for Databases.


13.9.2 Database utilization
           There are a number of different levels where system utilization can be monitored and
           checked in a database environment.

           Tablespace space usage
           This is a standard Alert provided by Data Manager for Databases. This Alert will be triggered
           by the Probe jobs.

           Archive log directory space usage
           This is a standard alert provided by Data Manager for Databases. This Alert will be triggered
           by the Probe jobs as shown in 13.7.1, “Instance Alerts” on page 588.

           Maximum extents used
           Your application may become unavailable if a table reaches its maximum allowed number of
           extents. This indicator can be monitored using the (Max Extents - Allocated Extents) <
           Table Alert.

13.9.3 Need for reorganization
           To ensure good application performance, it is important to be notified promptly if a database
           reorganization is required.

           Count of Used table extents
           You can monitor the need for table reorganization using the table Alert trigger Segment has
           more than n extents.


Count of chained rows
              Chained rows can have an impact on database access performance. This issue can be
              monitored using the Chained Row Count Greater than table Alert trigger.

              Freelist count
              You cannot monitor the count of freelists in an Oracle table using Data Manager for
              Databases.



13.10 Data Manager reporting capabilities
              The reporting capabilities of Data Manager are very rich, with over 300 predefined views. You
              can see the data from a very high level, for example, the total amount of free space available
              across the enterprise; or from a low level, for example, the amount of free space available on
              a particular volume or in a particular database table.

              The data can be displayed in tabular or graphical format, or can be exported as HTML,
              Comma Separated Values (CSV), or formatted report files.
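Because reports can be exported as CSV, they lend themselves to post-processing with ordinary scripts. The sketch below sums one numeric column across an exported report; the column headings shown in the usage are invented examples, not the actual export format:

```python
import csv
import io

def total_by_column(csv_text, column):
    """Sum a numeric column across all data rows of a CSV report
    whose first row contains the column headings."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row[column]) for row in reader)
```

For instance, totaling a hypothetical "Free Space (MB)" column across all rows yields an enterprise-wide free space figure without re-running the report.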

              The reporting function uses the data stored in the Data Manager repository. Therefore, in
              order for reporting to be accurate in terms of using current data, regular discovery, Ping,
              Probe, and Scan jobs must be scheduled. These jobs are discussed in 13.3, “OS Monitoring”
              on page 533.

              Figure 13-80 shows the Data Manager main screen with the reporting options highlighted.

              The Reporting sections are used for interactive reporting. They can be used to answer ad hoc
              questions such as, “How much free space is available on my UNIX systems?” Typically, you
              will start looking at the data at a high level and drill down to find specific detail. Much of the
              information can also be displayed in graphical form as well as in the default table form.

              The My Reports sections give you access to predefined reports. Some of these reports are
              predefined by Data Manager; others can be created by individual users by saving reporting
              criteria in the Reporting options. You can also set up Batch Reports to create reports
              automatically on a schedule.

              My Reports will be covered in more detail in 13.14, “Creating customized reports” on
              page 683, and 13.15, “Setting up a schedule for daily reports” on page 697.

              The additional feature, TotalStorage Productivity Center for Data for Chargeback, produces
              storage usage chargeback data, as described in 13.17, “Charging for storage usage” on
              page 700.




           (Figure callouts: predefined reports provided by TotalStorage Productivity Center for Data;
           reports customized and saved by user tpcadmin; scheduled reports that run in batch mode;
           interactive reporting options; and database reporting options.)
           Figure 13-80 TotalStorage Productivity Center for Data main screen showing reporting options


13.10.1 Major reporting categories
           Data Manager collects data for reporting purposes in seven major categories. These will be
           covered in the following sections. Within each major category there are a number of
           sub-categories.

           Most categories are available for both operating system level reporting and database
           reporting. However, a few are for operating system reporting only. The description of each
           category specifies which applies, and in the more detailed sections that follow, we present
           the capabilities separately for Data Manager and Data Manager for Databases as
           appropriate.

           Asset Reporting
           Asset data is collected by Probe processes and reports on physical components such as
           systems, disk drives, and controllers. Currently, Asset Reporting down to the disk level is only
           available for locally attached devices. Asset Reporting is available for both operating system
           and database reporting.




Storage Subsystems Reporting
               Storage Subsystem data is collected by Probe processes. It provides a mechanism for
               viewing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level.
               These reports also enable you to view the relationships among the components of a storage
               subsystem. Storage Subsystem reporting is currently only available for IBM TotalStorage
               Enterprise Storage Servers (ESS). Storage Subsystems Reporting is provided for operating
               system reporting only.

              Availability Reporting
              Availability data is collected by Ping processes and allows you to report on the availability of
              your storage resources and computer systems. Availability Reporting is provided for
              operating system reporting only.

              Capacity Reporting
              Capacity Reporting shows how much storage you have and how much of it is being used.
              You can report at anywhere from an entire network level down to an individual filesystem.
              Capacity Reporting is provided for both operating system and database reporting.

              Usage Reporting
               Usage Reporting goes down a level from Capacity Reporting. It is concerned not so much
               with how much space is in use, but rather with what the space is actually being used for. For
               example, you can create a report that shows usage by user, or a wasted space report. You
               define what wasted space means, but it could be, for example, files of a particular type, or
               files within a certain directory that are more than 30 days old. Usage Reporting is provided for
               both operating system and database reporting.
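As a sketch of one possible wasted space definition (files of a given type untouched for more than 30 days), the following stand-alone function approximates what a Scan using such a Profile would flag; the extension list and age threshold are example values, not product defaults:

```python
import os
import time

def wasted_space_report(root, max_age_days=30, extensions=(".tmp",)):
    """One possible definition of wasted space: files with a matching
    extension not modified for max_age_days. Returns (path, size)
    pairs for each file meeting the criteria."""
    cutoff = time.time() - max_age_days * 86400
    report = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.lower().endswith(tuple(extensions)):
                st = os.stat(path)
                if st.st_mtime < cutoff:
                    report.append((path, st.st_size))
    return sorted(report)
```

Summing the sizes in the result gives the reclaimable space figure such a report would present.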

              Usage Violation Reporting
              Usage Violation Reporting allows you to set up rules for the type and/or amount of data that
              can be stored, and then report on exceptions to those rules. For example, you could have a
              rule that says that MP3 and AVI files are not allowed to be stored on file servers. You can also
              set Quotas for how much space an individual user can consume. Note that usage violations
              are only softly enforced - Data Manager will not enforce the rules in real time, but will
              generate an exception report after the fact. Usage Violation Reporting is provided for both
              operating system and database reporting.

              Backup Reporting
              Backup Reporting identifies files that have not been backed up. Backup Reporting is provided
              for operating system reporting only.



13.11 Using the standard reporting functions
              This section discusses Data Manager’s standard reporting capabilities. Customized reporting
              is covered in 13.14, “Creating customized reports” on page 683.

              This section is not intended to cover exhaustively all of the reporting options available, as
              these are very numerous, and are covered in detail in the Reporting section of the manual
              IBM Tivoli Storage Resource Manager V1.1 Reference Guide SC32-9069. Instead, this
              section provides a basic overview of Data Manager reporting, with some examples of what
              types of reports can be produced, and additional information on some of the less
              straightforward reporting options.




To demonstrate the reporting capabilities of TotalStorage Productivity Center for Data, we
           installed the Server code on a Windows 2000 system called Colorado, and deployed these
           Windows Agents:
              Gallium
              Wisla
              Lochness
              Colorado (also an Agent, as well as being the Server)

           The host GALLIUM has both Microsoft SQL Server and Oracle databases installed to
           demonstrate database reporting. The Agent on LOCHNESS also provides data for a NAS
           device called NAS200. The Agent on VMWAREW2KSRV1 also provides data for a NetWare
           server called ITSOSJNW6.

          The lab setup is shown in Figure 13-81.




          Figure 13-81 TotalStorage Productivity Center for Data Lab Environment


13.11.1 Asset Reporting
          Asset Reporting provides configuration information for the TotalStorage Productivity Center
          for Data Agents. The information available includes typical asset details such as disk system
          name and disk capacities, but provides a large amount of additional detail.

          IBM TotalStorage Productivity Center for Data
          Figure 13-82 shows the major subtypes within Asset Reporting. Note that unlike the other
          reporting categories where most of the drill-down functions are chosen from the right-hand
          panel, in Asset Reporting the drill-down functions are mostly available on the left-hand pane.



Figure 13-82 Reporting - Asset

              By Cluster View
              Click By Cluster to drill down into a virtual server or cluster node. You can drill down further
              to a specific controller to see the disks under it and/or drill down on a disk to see the file
              systems under it.

              By Computer view
              Click By Computer to see a list of all of the monitored systems (Figure 13-83.)




              Figure 13-83 Reporting - Asset - By Computer




From there we can drill down on the assets associated with each system. We will take a look
at node GALLIUM. In Figure 13-84 we have shown most of the items for GALLIUM expanded,
with the details for Disk 2 displayed in the right-hand bottom pane.

You will see a detailed level of information, both in terms of the type of objects for which data
is collected (for example, Exports or Shares), and the specific detail for a given device.




Figure 13-84 Report - GALLIUM assets

By OS Type view
This view of the Asset data provides the same information as the By Computer view, with the
difference that the Agent systems are displayed sorted by operating system platform.

By Storage Subsystem view
Data Manager provides storage subsystem reporting for any disk array subsystem whose
SMI-S Provider is CTP certified by SNIA for SMI-S 1.0.2, and for IBM SAN Volume Controller
clusters.

For disk array subsystems, you can view information about:
   –   Disk groups (for IBM TotalStorage ESS subsystems)
   –   Array sites (for IBM TotalStorage DS6000/8000 only)
   –   Ranks (for IBM TotalStorage DS6000/8000 only)
   –   Storage pools (for disk array subsystems)
   –   Disks (for disk array subsystems)
   –   LUNs (for disk array subsystems)

For IBM SAN Volume Controllers, you can view information about:
   – Managed disk groups
   – Managed disks
   – Virtual disks




System-wide view
               The System-wide view does, however, provide additional capability, giving a system-wide
               rather than a node-by-node view of some of the data. A graphical view of some of the data is
               also available. Figure 13-85 shows most of the options available from the System-wide view
               and, in the main panel, the report of all exports or shares available.




              Figure 13-85 Reporting - Assets - System-wide view

               Each of the options available under the System-wide view is self-explanatory, with the
               possible exception of Monitored Directories. Data Manager can monitor utilization at a
               directory level as well as at a device or filesystem level. However, by default, directory-level
               monitoring is disabled.

              To enable directory monitoring, define a Directory Group by selecting Data Manager →
              Monitoring → Groups → Directory, right-click Directory and choose Create Directory
              Group. The process of setting up Directory Groups is discussed in more detail in 13.3.2,
              “Groups” on page 535. Once the Directory Group is created it must be assigned to a Scan
              job, and that job must be run on the systems where the directories to be monitored exist.

              By setting up a monitored directory you will get additional information for that directory. Note
              that the information collected includes any subdirectories. Information collected about the
              directory tree includes the number of files, number of subdirectories, total space used, and
              average file size. This can be graphed over time to determine space usage patterns.
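The statistics described above amount to a single walk of the directory tree that totals counts and sizes. The following is a rough stand-alone sketch of that aggregation; the returned field names are invented, not the product's own:

```python
import os

def directory_summary(root):
    """Aggregate file count, subdirectory count, total space, and
    average file size over a directory tree, subdirectories included,
    similar to what a monitored directory reports."""
    files = subdirs = total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        subdirs += len(dirnames)
        files += len(filenames)
        total += sum(os.path.getsize(os.path.join(dirpath, f))
                     for f in filenames)
    return {"files": files,
            "subdirectories": subdirs,
            "total_bytes": total,
            "average_file_size": total / files if files else 0.0}
```

Recording this summary at regular intervals is what allows the growth of a directory tree to be graphed over time.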

              IBM TotalStorage Productivity Center for Data for Databases
              Asset Reporting for databases is similar to that for filesystems; however, filesystem entities
              like controllers, disks, filesystems, and shares are replaced with database instances,
              databases, tables, and data files.


Very specific information regarding an individual database is available as shown in
Figure 13-86 for the database DMCOSERV on node COLORADO.




Figure 13-86 DMCOSERV database asset details

Or you can see rollup information for all databases on a given system (using the System-wide
view) as shown in Figure 13-87.




Figure 13-87 System-wide view of database assets


All of the database Asset Reporting options are quite straightforward, with one exception: in
               order to receive table-level asset information, one or more Table Groups need to be defined.
               This is a similar process to that for Directory Groups, as described in “System-wide view” on
               page 598.

               You would not typically include all database tables within Table Groups, but perhaps only
               critical or rapidly growing tables. We will set up a group for UDB.

               To set up a Table Group, select Data Manager - Databases → Monitoring → Groups →
               Table, right-click Table, and choose Create Table Group (Figure 13-88).




              Figure 13-88 Create a new database table group

               We have entered a description of Colorado Table Group. Now we click New Instance to
               enter the details of the database and tables that we want to monitor. From the drop-down
               box, we select the database instance, in this case the UDB instance on Colorado. We then
               enter three tables in turn. For each table, we enter the database name (DMCOSERV), the
               creator name (db2admin), and a table name, clicking Add after each. We entered the table
               names BASEENTITY, DMSTORAGEPOOL, and DMVOLUME, as shown in Figure 13-89.
               Once all of the tables have been entered, click OK.




Figure 13-89 Add UDB tables to table group

Now we return to the Create Table Group panel, and we see in Figure 13-90 the information
about the newly entered tables.




Figure 13-90 Tables added to table group

Now we save by clicking the floppy disk icon and, when prompted, enter the Table Group
name ColoradoTableGroup.

In order for the information for our tables to be collected, the Table Group needs to be
assigned to a Scan job. We will assign it to the default database Scan job by choosing
Data Manager - Databases → Monitoring → Scans → TPCUser.Default DB Scan.




The definition for this scan job is shown in Figure 13-91 and in particular we see the Table
              Groups tab. Our new Table Group is shown initially in the left hand pane. We moved it to the
              right hand pane by selecting it and clicking >>. We then save the updates to the Scan job by
              choosing File → Save (or with the floppy disk icon from the tool bar). Finally, we can execute
              the Scan job by right-clicking it and choosing Run Now. Figure 13-91 shows the Scan job
              definition after the Table Group had been assigned to it.




              Figure 13-91 Table group added to scan job

              Example 13-4 is an extract from the Scan job log showing that the table information is now
              being collected. You can view the Scan job log through the TotalStorage Productivity Center
              for Data GUI by first expanding the particular Scan job definition. A list of Scan execution
              reports will be shown; select the one of interest. You may need to right-click the Scan job
               definition and choose Refresh Job List. The list of Scan executions for the TPCUser.Default
               DB Scan is shown in Figure 13-92.




              Figure 13-92 Displaying Scan job list



Once you have chosen the actual job, you can click the detail icon for the system that you are
interested in to display the job log. The actual file specification of the log file on the Agent
system is displayed at the top of the output when viewed through the GUI. Example 13-4
shows the actual file output.

Example 13-4 Database scan job showing table monitoring
09-19 18:01:01 DBA0036I: The following databases-tablespaces will be scanned:
                         MS SQLServer gallium/gallium Databases:
                            master
                            model
                            msdb
                            Northwind
                            pubs
                            tempdb
                         Oracle itsrm Tablespaces:
                            ITSRM.DRSYS
                            ITSRM.INDX
                            ITSRM.RBS
                            ITSRM.SYSTEM
                            ITSRM.TEMP
                            ITSRM.TOOLS
                            ITSRM.USERS
09-19 18:01:01 DBA0041I: Monitored Tables:
                             .CTXSYS.DR$OBJECT
                            Northwind.dbo.Employees
                            Northwind.dbo.Customers
                            Northwind.dbo.Suppliers


Finally, we can produce table level asset reports by choosing for example, Data Manager -
Databases → Reporting → Asset → System-wide → All DBMSs → Tables → By
Total Size. This is shown in Figure 13-93.




Figure 13-93 Tables by total size asset report




13.11.2 Storage Subsystems Reporting
              Storage Subsystems Reporting is covered in detail in 13.12, “TotalStorage Productivity
              Center for Data ESS Reporting” on page 634.


13.11.3 Availability Reporting
               Availability Reporting is quite simple. Two different sets of numbers are reported: Ping and
               Computer Uptime. Ping is only concerned with whether or not the system is up and
               responding to ICMP requests; it does not care whether the Data Agent is running. Ping
               results are collected by a Ping job, so this must be scheduled to run on a regular basis.
               See 13.3.4, “Pings” on page 542.

              Computer Uptime detects whether or not the Data Agent is running. Computer Uptime
              statistics are gathered by a Probe job so this must be scheduled to run on a regular basis.
              See 13.3.5, “Probes” on page 545.
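In both cases, the availability figure behind such a report reduces to the ratio of successful samples to total samples collected. A minimal sketch, assuming each sample is recorded simply as a success or failure:

```python
def availability_percent(samples):
    """Compute percent availability from a list of samples, where
    each sample is True (the system responded) or False (it did
    not). An empty sample list yields 0.0 rather than an error."""
    if not samples:
        return 0.0
    return 100.0 * sum(1 for s in samples if s) / len(samples)
```

The same calculation applies whether the samples come from Ping responses or from Probe-gathered Data Agent uptime checks.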

              Figure 13-94 shows the Ping report for our TotalStorage Productivity Center for Data
              environment, and Figure 13-95 shows the Computer Uptime report. To generate these
              reports, we had to select the computers of interest and select Generate Report.




              Figure 13-94 Reports - Availability - Ping




604   IBM TotalStorage Productivity Center V2.3: Getting Started
Figure 13-95 Reports - Availability - Computer Uptime


13.11.4 Capacity Reporting
           Capacity Reporting shows how much storage capacity is installed, and of that capacity, how
           much is being used and how much is available for future growth.

           IBM TotalStorage Productivity Center for Data
           There are four capacity report views within TotalStorage Productivity Center for Data:
              Disk Capacity
              Filesystem Capacity
              Filesystem Used Space
              Filesystem Free Space

            However, in reality there are only two distinct views, or perhaps three. The Filesystem
            Capacity and Filesystem Used Space views are nearly identical - the only differences are the
            order of the columns and the row sort order.

            There is also relatively little difference between these two views and the Filesystem Free
            Space view. The Filesystem Capacity and Filesystem Used Space views report on used
            space, so they include columns such as percent used space, whereas Filesystem Free Space
            includes columns such as percent free space. All other data is identical.

            Therefore, there are effectively only two views: a Disk Capacity view and a Filesystem
            Capacity view.
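To illustrate why the filesystem views are really one view, this sketch derives both the used-space and free-space columns from the same two underlying numbers. The function and data layout are illustrative, not the product's schema.

```python
def filesystem_views(capacity_mb, used_mb):
    # Both views derive from the same capacity figures; only the
    # columns presented differ.
    free_mb = capacity_mb - used_mb
    return {
        "Filesystem Used Space": {"used_mb": used_mb,
                                  "pct_used": 100.0 * used_mb / capacity_mb},
        "Filesystem Free Space": {"free_mb": free_mb,
                                  "pct_free": 100.0 * free_mb / capacity_mb},
    }

views = filesystem_views(capacity_mb=1000, used_mb=250)
# pct_used and pct_free always sum to 100; the data behind them is identical
```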

           The Disk Capacity view provides information about physical or logical disk devices and what
           proportion of them has been allocated. Figure 13-96 shows the Disk Capacity by Disk
           selection window.




Figure 13-96 Disk capacity report selection window

              Often there is a one-to-one relationship between devices and filesystems, as seen in
              Figure 13-97, particularly on Windows systems. However, if a single physical disk has two
              partitions, the detailed description will show two partitions at the bottom of the right-hand
              pane.




              Figure 13-97 Capacity report - Gallium Disk 0



IBM TotalStorage Productivity Center for Data for Databases
          Capacity Reporting for databases is very straightforward. You can report on:
             All databases of any type
             All databases of a given type on a particular system or group of systems
              A specific database

           Figure 13-98 shows a Capacity Report by Computer Group. We actually have databases in
           just one Computer Group, WindowsDBServers. We then drilled down to see all systems within
           the WindowsDBServers group, then specifically to node GALLIUM, so that we could see all
           databases on GALLIUM.




          Figure 13-98 Database Capacity report by Computer Group


13.11.5 Usage Reporting
          The reporting categories covered so far have been mostly concerned with reporting at the
          system or device level. Usage Reporting goes down one more step to report at a level lower
          than the filesystem. You can produce reports that answer questions such as:
             How old is my data? When was it created, last accessed, or modified?
             What are my largest files? What are my largest directories?
             Do I have any orphan files?

          Data Manager
          With Usage Reporting, you will be able to:
             Identify orphan files and either update their ownership or delete them to free up space
             Identify the largest files and determine whether they are needed or whether parts of the
             data could be archived
             Identify obsolete files so that they can be either deleted or archived
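These tasks can be sketched as simple queries over scanned file records. The records, user list, and field names below are hypothetical, not the Data Manager repository format.

```python
# Hypothetical scan results: path, owner, and size of each file.
files = [
    {"path": "/data/a.dat", "owner": "alice", "size_mb": 120},
    {"path": "/data/b.tmp", "owner": None,    "size_mb": 40},
    {"path": "/data/c.log", "owner": "bob",   "size_mb": 500},
]
valid_users = {"alice", "bob"}

# Orphan files: no owner, or an owner no longer in the user registry.
orphans = [f for f in files if f["owner"] not in valid_users]

# Largest files: candidates for archiving or deletion.
largest = sorted(files, key=lambda f: f["size_mb"], reverse=True)[:2]
```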




There are a few restrictions on Usage Reporting:
                  In order to report by directory or by Directory Group you will need to set them up in Data
                  Manager → Monitoring → Groups → Directory.
                  UNIX systems do not record file create dates, so no reporting by creation time is available
                  for these systems.

              Data Manager for Databases
              Like database Asset Reporting, all of the database Usage Reporting options are quite
              straightforward with the exception of table level reporting.

              From a usage perspective there are two types of table report available:
                  Largest tables
                  Monitored tables

              We can report on database largest tables by choosing for example, Data Manager -
              Databases → Reporting → Usage → All DBMSs → Tables → Largest Tables → By
              RDBMS Type. This report is shown in Figure 13-99.




              Figure 13-99 Largest tables by RDBMS type




A Monitored Tables by RDBMS Type report is shown in Figure 13-100. In this case, only
tables that belong to a Table Group included in a Scan job will be reported on.




Figure 13-100 Monitored tables by RDBMS type




13.11.6 Usage Violation Reporting
              Usage Violation Reporting enforces Data Manager Constraints and Quotas. A Constraint is a
              limit, by file name syntax, on the type of data that can be stored on a system. A Quota is a
              storage usage limit placed on a user or operating system User Group, and can be defined at
              the network, computer, or filesystem level. Constraints and Quotas were described in 13.5,
              “Policy management” on page 565. It is important to remember that Quotas and Constraints
              are not hard limits - users will not be stopped from working if a Quota or Constraint is violated,
              but this event will trigger an exception, which will be reported.
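The soft-limit behaviour can be sketched as follows: a violated Quota raises no error and blocks nothing; it only records an exception for reporting. The function name, limits, and record layout are illustrative, not the product's API.

```python
def check_quota(usage_mb, limit_mb, user):
    """Return an alert record if the quota is exceeded, else None.
    The user is never prevented from writing more data."""
    if usage_mb > limit_mb:
        return {"user": user, "usage_mb": usage_mb,
                "limit_mb": limit_mb, "type": "quota-violation"}
    return None

alert_log = []
for user, usage in [("alice", 1500), ("bob", 800)]:
    alert = check_quota(usage, limit_mb=1024, user=user)
    if alert:
        alert_log.append(alert)  # the exception is reported, not enforced
```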

              Data Manager Constraint Violation Reporting
              There are a number of predefined Constraints in Data Manager. Before we produce a
              Constraint violation report, we need to set up a new Constraint called forbidden files. Setting
              up Constraints was described in 13.5.3, “Constraints” on page 570.

              First navigate Data Manager → Policy Management → Constraints. Existing Constraints
              will be listed. Right-click Constraints and choose Create Constraint. On the Filesystems tab
              we entered a description of forbidden files, chose Computer Groups, then selected
              tpcadmin.Windows Systems and tpcadmin.Windows DB Systems and clicked >>. The
              completed Filesystems tab is shown in Figure 13-101.




              Figure 13-101 Create a Constraint - Filesystems tab




We then need to specify, in the File Types tab, what a forbidden file is. You can define the
criteria as either inclusive or exclusive; that is, you can specify just those file types that will
violate the Constraint, or you can specify that all files will violate the Constraint except those
specified. A number of predefined file types are included; you can also choose additional
files by entering appropriate values in the “Or enter a pattern” field at the bottom of the
form. We have chosen MP3 and AVI files. The completed File Types tab is shown in
Figure 13-102.




Figure 13-102 Create a Constraint - File Types tab

The Users tab is very similar to the File Types tab - you can specify which users should be
included or excluded from the selection criteria. We have taken the default, which is to include
all users.

In the Options tab, we nominate a maximum number of rows to be returned. We can also
apply some more specific selection criteria here, such as only including files that are larger
than a defined size. Note, however, that these criteria are ORed with the file type criteria. For
example, if we specified here that we only wanted to include files greater than 1 MB, the
search criteria would be changed to ((NAME matches any of ('*.AVI', '*.mp3') AND TYPE <>
DIRECTORY) OR SIZE > 1 MB). So the returned list of files would be any file greater than
1 MB in size plus any *.MP3 or *.AVI files.




If you wish to change the selection criteria so that you instead select only those *.MP3 or
              *.AVI files that are larger than 1 MB, you can enter 1 MB against the bigger than option, and
              then click the Edit Filter button shown in Figure 13-105. You will then see the file filter as
              shown in Figure 13-103. To add the size criterion to the file type criteria, click the Size > 1MB
              entry and drag it up to the All of tag. The changed filter is shown in Figure 13-104. You can
              also see that the Boolean expression for the filter has changed to reflect this condition.




              Figure 13-103 Edit a Constraint file filter - before change




              Figure 13-104 Edit a Constraint file filter - after change
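The net effect of the two filter shapes (the default OR combination versus the edited all-of form) can be sketched with Python's fnmatch. This is a hedged illustration of the Boolean logic only, not the product's matching engine.

```python
from fnmatch import fnmatch

PATTERNS = ("*.avi", "*.mp3")

def violates_or(name, size_mb, is_dir):
    """Default filter: (name matches AND not a directory) OR size > 1 MB."""
    name_match = any(fnmatch(name.lower(), p) for p in PATTERNS)
    return (name_match and not is_dir) or size_mb > 1

def violates_and(name, size_mb, is_dir):
    """Edited filter: name matches AND not a directory AND size > 1 MB."""
    name_match = any(fnmatch(name.lower(), p) for p in PATTERNS)
    return name_match and not is_dir and size_mb > 1
```

A 5 MB report.doc violates the OR form (it is simply a big file) but not the AND form, which only flags large files of the forbidden types.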




In this case we did not want to apply a size criterion, so we left the Options tab entries at their
defaults as shown in Figure 13-105.




Figure 13-105 Create a Constraint - Options tab

Finally, we can specify that we want an Alert generated if a triggering condition is met. The
only choice here is to specify a maximum amount of space consumed by the files that meet
our selection criteria. We left all of the Alert tab options at their defaults other than specifying
an upper limit of 100 MB for files that have met our selection criteria.




The Alert tab is shown in Figure 13-106. Alerting is covered in more detail in 13.4, “OS Alerts”
              on page 555.




              Figure 13-106 Create a Constraint - Alert tab

              We then clicked the Save button and entered a name of Forbidden Files as shown in
              Figure 13-107.




              Figure 13-107 Create a Constraint - save

              Before we can report against the Constraint, we need to ensure that a Scan job has been run
              to collect the appropriate information.

              Once the Scan has completed successfully, you can go ahead and produce Constraint
              Violation Reports. Note that you cannot produce a report of violations of a particular
              Constraint - the report will include entries for any Constraint violation. However, once the
              report is generated, you can drill down into specific Constraint violations.

              We produced the report by choosing Data Manager → Reporting → Usage Violations →
              Constraint Violation → By Computer. You will see a screen like Figure 13-108 where you
              can select a subset of the clients if appropriate - after selecting, click Generate Report.




Figure 13-108 Constraint violation report selection screen

You will then see a list of all instances of Constraint violations, as shown in
Figure 13-109.

The report shows multiple types of Constraints. Some of these Constraints were predefined
(Orphaned File Constraint and Obsolete File Constraint) and others (ALLFILES and forbidden
files) we defined. An orphaned file is any file that does not have an owner. This allows you to
easily identify files that belonged to users who have left your organization or have had an
incorrect ownership set.




Figure 13-109 Constraint violations by computer




From there you can drill down on a specific Constraint, then filesystems within the Constraint,
              and finally to a list of files that violated the Constraint on that filesystem by selecting the
              magnifying glass icon next to the entry of interest. Or, as shown in Figure 13-110, by clicking
              the pie chart icon next to the entry for forbidden files, you can produce a graph indicating what
              proportion of capacity is being utilized by files violating the Constraint. Position the cursor
              over any segment of the pie chart to show the percentage and number of bytes consumed by
              that segment.




              Figure 13-110 Graph of capacity used by Constraint violating files

              Constraint violations are also written to the Data Manager Alert Log. Figure 13-111 shows the
              same list of violations as if you had produced a Constraint Violations by computer report.




              Figure 13-111 Alert log showing Constraint violations




Quota Violation Reporting
The process of producing a Quota violation report is very similar to producing a Constraint
violation report, but with some key differences.

One difference between Quotas and Constraints is the process of collecting data. For
Constraints, the data is collected as part of a standard Scan job in a similar way to adding an
additional Profile to a Scan. Quota data collections are performed in a separately scheduled
job. So, when you set up a Quota you need to specify scheduling parameters.

We set up a Quota rule called Big Windows Users by choosing Data Manager → Policy
Management → Quotas → User → Computer, right-clicking Computer and selecting
Create Quota. On the Users screen we entered a description of Big Windows Users and then
selected User Groups and then TPCUser.Default User Group as shown in Figure 13-112.




Figure 13-112 Create Quota - Users tab




On the Computers tab we chose our Windows group: tpcadmin.Windows Systems
              (Figure 13-113).




              Figure 13-113 Create Quota - Computers tab

              We then had to specify when and how often we wanted the Quota job to run. We chose to run
              the job weekly under the When to CHECK tab as shown in Figure 13-114.




              Figure 13-114 Create Quota - When to Check


On the Alert tab, shown in Figure 13-115, we accepted all of the defaults other than to specify
the limit under User Consumes More Than, in this case, 1 GB.

No Alerts will be generated other than to log any exceptions in the Data Manager Alert Log.




Figure 13-115 Create Quota - Alert

Finally, we save the Quota definition, calling it Big Windows Users as shown in Figure 13-116.




Figure 13-116 Create Quota - save




The new Quota now appears under Data Manager → Policy Management → Quotas →
              User → Computer as tpcadmin.Big Windows Users (where tpcadmin is our Data Manager
              username). We right-clicked the Quota and chose Run Now as in Figure 13-117.




              Figure 13-117 Run new Quota job

              This job will collect data related to the Quota, and add any Quota Violations to the Alert Log
              as shown in Figure 13-118.




              Figure 13-118 Alert Log - Quota violations




We then drilled down on one of the Alerts to see the details (Figure 13-119).




Figure 13-119 Alert Log - Quota violation detail

And finally we can create a Quota Violation report by choosing Data Manager → Reporting
 → Usage Violations → Quota Violations → Computer Quotas → By Computer. The
high-level report is shown in Figure 13-120.




Figure 13-120 Quota violations by computer




We can then drill down further for additional detail or to produce a graphical representation of
              the data behind the violation. The graph in Figure 13-121 shows a breakdown of the users’
              data by file size.




              Figure 13-121 Quota violation graphical breakdown by file size


              Data Manager for Databases
              Filesystem Usage Violation Reporting includes both Quota and Constraint violations.
              However, for databases, only Quota violations are available.

              You can place a Quota on users, user groups, or all users, and you can limit the Quota by
              computer, computer group, database instance, database tablespace group, or tablespace.

              We will set up an Instance Quota that limits any individual user to 100 MB of space per
              instance for any database on any server in the tpcadmin.WindowsDBServers computer
              group.

              To do this, navigate to Data Manager - Databases → Policy Management → Quotas →
              Instance. Right-click Instance and choose Create Quota. Figure 13-122 shows the Quota
              definition screen. We entered a description of Big DB Users and selected the
              TPCUser.Default User Group by expanding User Groups, clicking TPCUser.Default User
              Group, and then clicking >>.




Figure 13-122 Create database Quota - Users tab

On the Instances tab, expand Computer Groups, select tpcadmin.Windows DB Systems
and then click >> to add it to the Current Selections as shown in Figure 13-123.




Figure 13-123 Create database Quota - Instances tab




On the When to Run tab shown in Figure 13-124, we chose to run the Quota job weekly and
              chose a time of day for the job to run. Other values were left at the defaults.




              Figure 13-124 Create a database Quota - When to Run tab

              On the Alert tab (shown in Figure 13-125) we specified the actual Quota that we wanted
              enforced, which was a 100 MB per user Quota. Other values were left as defaults.




              Figure 13-125 Create a database Quota - Alert tab




We saved the new Quota definition with a name of Big DB Users as shown in Figure 13-126.




Figure 13-126 Create a database Quota - Save

We now run the Quota by right-clicking it and choosing Run Now as seen in Figure 13-127.




Figure 13-127 Run the database Quota




To check if any user has violated the Quota, navigate to Data Manager - Databases →
              Alerting → Alert Log → All DBMSs → All. We see one violation as shown in
              Figure 13-128.




              Figure 13-128 DB Quota violation

              We can also now run a database Quota violation report by choosing Data Manager -
              Databases → Reporting → Usage Violations → Quota Violations → All Quotas → By
              User Quota. This report can be seen in Figure 13-129.




              Figure 13-129 Database Quota violation report




13.11.7 Backup Reporting
           Backup Reporting is designed to do two things: it can alert you to situations where files have
           been modified but not backed up, and it can report the volume of data that will be backed
           up. Figure 13-130 shows the options that are available for Backup Reporting.




          Figure 13-130 Backup Reporting options


          Most at Risk Files
           Data Manager defines most at risk files as those that have been modified but not backed up,
           ranked with the least-recently modified files first.

           There are some points worth noting about this report:
              Since the report relies on the archive bit being set to determine whether a file has
              changed, this report only works on Windows systems; UNIX systems have no
              equivalent to the archive bit.
              With most backup products, once a file has been backed up the archive bit is
              cleared. Before Version 5.2, IBM Tivoli Storage Manager did not do this, so if an earlier
              level of Tivoli Storage Manager is used, this report may list files that actually have
              been backed up. IBM Tivoli Storage Manager Version 5.2 has the ability to reset the
              Windows archive bit after a successful backup of a file.
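The ranking described above can be sketched as follows: keep only files whose archive bit is set (modified since the last backup), then sort with the oldest modification first. The record layout and the 20-file limit are illustrative.

```python
from datetime import datetime

def most_at_risk(files, limit=20):
    """Return up to `limit` changed files, least-recently modified first."""
    changed = [f for f in files if f["archive_bit"]]
    return sorted(changed, key=lambda f: f["mtime"])[:limit]

files = [
    {"path": r"C:\docs\old.xls",  "archive_bit": True,
     "mtime": datetime(2005, 1, 10)},
    {"path": r"C:\docs\new.doc",  "archive_bit": True,
     "mtime": datetime(2005, 6, 1)},
    {"path": r"C:\docs\safe.txt", "archive_bit": False,  # already backed up
     "mtime": datetime(2004, 12, 1)},
]

risk = most_at_risk(files)  # old.xls ranks first; safe.txt is excluded
```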




By default, information on only 20 files will be returned. Figure 13-131 shows the selection
              screen for the report. You will notice that the report uses the Profile TPCUser.Most at Risk. It
              is in this Profile that the 20 file limit is set, although the value can be changed. You can
              override the value on the selection screen, but you can only reduce the value here, not
              increase it.

              By updating the Profile you can also exclude files from the report. By default, any file in the
              WINNTsystem* directory tree on any device will be excluded. You can add entries to the
              exclusion list if appropriate. Ideally, the exclusion list should be the same as that in your
              backup product.




              Figure 13-131 Files most at risk report - selection

              Modified Files Not Backed Up
              The report provides an aging analysis of your data that has been modified but not backed up.
              It will show what proportion of the data has been modified within the past 24 hours, between
              one and seven days, between one week and one month, and so on. Figure 13-132 shows the
              selection taken in our Windows environment. Like the Most at Risk Files report, this report
              also relies on the archive bit, so check to see if your backup application uses this.




Figure 13-132 Modified Files not backed up selection

To view the report, click Generate report. We chose to view it as a chart by clicking
the pie icon and selecting Chart: Space Distribution for All. This is shown in Figure 13-133.
This chart shows the amount of space consumed by files that have not been backed up since
the last backup was run for this server.




Figure 13-133 Modified Files not backed up chart overall view




We can also select Chart: Count Distribution for All as shown in Figure 13-134 to show the
              number of files in each category.




              Figure 13-134 Files need backed up chart in detail view

              The different charts can be viewed in different ways. To select another type of chart,
              right-click in the chart area, select Customize this chart, and click the radio button next to
              the desired chart type.

              Backup Storage Requirements Reporting
               This option allows you to determine how much data would be backed up if you were to
               perform either a full or an incremental backup. The Full Backup Size option can be used
               regardless of the OS type and the backup application in use.




In Figure 13-135, the report is run against Windows systems by filesystem.




Figure 13-135 Backup storage requirements per filesystem

The report can also be run by computer, as shown in Figure 13-136.




Figure 13-136 Backup storage requirement per computer and per filesystem




The Incremental Backup Size option makes use of the archive bit, so it can only be used on
              Windows systems, and if Tivoli Storage Manager is the backup application, the
              resetarchiveattribute option must be used (for Version 5.2). A sample report is shown in
              Figure 13-137.




              Figure 13-137 Incremental reporting per Node and Filesystem based on files




The third report type here is Incremental Range Sizes Reporting. This does not rely on the
archive bit (instead, it uses the modification date), so it is more generically applicable. This
report can show the actual difference between a traditional weekly full/daily incremental
backup process and Tivoli Storage Manager’s progressive incremental approach. To
generate this report, select Data Manager → Reporting → Backup → Backup Storage
Requirements → Incremental Range Sizes → By Computer as shown in Figure 13-138.




Figure 13-138 Incremental Range Size select By Computer




After you select the Computers of interest, click Generate Report. Figure 13-139 shows the
               output from this report, with the amount of data changed for different time ranges. Note that
               the values are cumulative, so for each time range, the values shown include the smaller time
               periods.
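The cumulative time-range totals can be sketched as follows: each range sums all files changed within it, so larger ranges contain the smaller ones. The ranges and file records below are illustrative.

```python
from datetime import datetime, timedelta

def range_sizes(files, now, ranges_days=(1, 7, 30)):
    """Total size (MB) of files modified within each time range."""
    totals = {}
    for days in ranges_days:
        cutoff = now - timedelta(days=days)
        totals[days] = sum(f["size_mb"] for f in files
                           if f["mtime"] >= cutoff)
    return totals

now = datetime(2005, 6, 30)
files = [
    {"size_mb": 10,  "mtime": now - timedelta(hours=6)},  # within 1 day
    {"size_mb": 50,  "mtime": now - timedelta(days=3)},   # within 7 days
    {"size_mb": 200, "mtime": now - timedelta(days=20)},  # within 30 days
]

totals = range_sizes(files, now)
# totals[1] == 10, totals[7] == 60, totals[30] == 260 - cumulative
```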




              Figure 13-139 Incremental Range Sizes Report



13.12 TotalStorage Productivity Center for Data ESS Reporting
               The reporting capabilities in TotalStorage Productivity Center for Data are expanded in
               Version 1.2 to include Enterprise Storage Server (ESS) reporting. IBM Tivoli Storage
               Resource Manager uses Probe jobs to collect information about the ESS. We can then use
               the reporting facility to view that information. The new subsystem reports show the capacity,
               controllers, disks, and LUNs of an ESS and their relationships to computers and filesystems
               within a network.


13.12.1 ESS Reporting
              For this section we discuss ESS asset and storage subsystem reporting, making references
              to the ESS lab environment in Figure 13-140 below. Note that the host which accesses the
              ESS had a TotalStorage Productivity Center for Data Agent installed. This provides the fullest
              combination of reporting ability for the ESS. If an ESS-attached host does not have a
              TotalStorage Productivity Center for Data Agent installed, items such as filesystem, logical
              volume, and device logical names will not be displayed.




Figure 13-140 ESS reporting lab (CIM/OM server w2kadvtsm, Win2k Srv SP3, 172.31.1.135; ITSRM Agent host tsmsrv43p, IBM 43p, AIX 5.1 ML 4, 172.31.1.155; ESS F20 at 172.31.1.1 behind a 2109 switch; ITSRM Server w2kadvtsrm, Win2k Srv SP3, 172.31.1.133; all connected via the intranet)


Prerequisites to ESS Reporting
Before doing ESS reporting with Data Manager, the following conditions are required:
   CIM/OM server successfully installed.
   Data Manager successfully logs into CIM/OM server.
   Data Manager successfully runs a discovery and probes the ESS.

    Important: Refer to Chapter 5, “CIMOM install and configuration” on page 191 and 8.1,
    “Configuring the CIM Agents” on page 290 for additional details on confirming these
    prerequisites.

Data Manager will run a discovery to locate the CIM/OM server in our environment, which in
turn discovers the ESSs. See 8.1.2, “Configuring CIM Agents” on page 290.

Creating the ESS Probe
IBM Tivoli Storage Resource Manager will then run a Probe to query the discovered ESS.
The Probe collects detailed statistics about the storage assets in our enterprise, such as
computers, storage subsystems, disk controllers, hard disks, and filesystems.




Next, we show how to create a Probe for an ESS-F20. Select Probes → Select new probe,
              then under the Computers tab, choose Storage Subsystems. See Figure 13-141.




              Figure 13-141 Creating ESS probe

              On the When to PROBE tab, we selected PROBE Now because we need to populate the
              backend repository. See Figure 13-142.




              Figure 13-142 ESS - When to probe




Next is the Alert tab, shown in Figure 13-143. This defines the type of notification for a Probe.




Figure 13-143 ESS - Alert tab

After all parameters are defined, save the Probe definition. At this point the Probe is
submitted and will run immediately.


 Note: For additional information on creating Probes, see 13.3.5, “Probes” on page 545.

There are several ways to check the status of the Probe job. First, we can check the color of
the Probe job entry in the navigation tree, then in the content panel. There are two colors that
represent job status:
   GREEN - Job completed successfully with no errors
   RED - Job completed with errors




The status of the Probe job is displayed in text and in color, as shown in Figure 13-144, after
              selecting the Probe job output in the navigation tree. The job at 1:55 pm is in green, indicating
              success.




              Figure 13-144 ESS - probe job status

              We open the Probe job by selecting it and double-clicking the magnifying glass icon next to
              the job in the content window. We see the contents of the job, including detailed information
              on the status, as in Figure 13-145. Here, we have selected the successful Probe.




              Figure 13-145 Probe job log




Asset Reports - By Storage Subsystem
With Asset reporting by storage subsystem, you can view the centralized asset repository that
Data Manager constructs during a Probe. The Probe itemizes the information about
computers, disks, controllers, and filesystems, and builds a hardware inventory of assets.
With the backend repository now populated with DS6000 asset information, we will show how
to view reports to display the storage resources.

We choose Data Manager → Reporting → Asset → By Storage Subsystem → Tucson
DS6000. This report provides specific resource information of the DS6000 and allows us to
view storage capacity by a computer, filesystem, storage subsystem, LUN, and disk level. We
can also view the relationships between the components of a storage subsystem. Notice that
the navigation tree is hierarchical. See Figure 13-146.




Figure 13-146 Asset by storage subsystem




We drill down to the Disk Groups. A Disk Group contains information related to the ESS, as
               well as the volume spaces and disks associated with that Disk Group. Expanding the Disk
               Groups node displays a list of all Disk Groups on the ESS (Figure 13-147).




              Figure 13-147 ESS disk group

              Continuing, we expand the disk group DG1 to view the disks and volume spaces within it.
              We open Volume Space VS3, which shows the disks and LUNs associated with it. The Disks
              subsection shows the individual disks associated with the Volume Space (see Figure 13-148).




Figure 13-148 Disks in volume spaces

Notice the LUNs subsection for disk DD0105 (Figure 13-149). This shows the LUN to disk
relationship. The LUNs shown here are just a subset of all the LUNs. You can see that the
LUN is spread across all the displayed disks in the content window.
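The LUN-to-disk relationship shown here is many-to-many: a LUN is striped across several physical disks, and one disk usually carries extents of several LUNs. A small illustrative sketch (the disk and LUN names are invented):

```python
# Invented LUN-to-disk associations: an ESS LUN is striped across several
# physical disks, and one disk usually carries parts of several LUNs.
lun_to_disks = {
    "LUN_1000": ["DD0105", "DD0106", "DD0107"],
    "LUN_1001": ["DD0105", "DD0106"],
}

def luns_on_disk(lun_map, disk):
    """Invert the mapping: which LUNs have an extent on a given disk?"""
    return sorted(lun for lun, disks in lun_map.items() if disk in disks)
```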




Figure 13-149 Disk and LUN association with volume space




Figure 13-150 shows the discovery of a disk with no LUN associations. This is known as a hot
              spare. It can be used when one of the other seven disks in the disk group fails.
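The detection logic is simple to picture: a hot spare is a disk that appears in no LUN association. An illustrative sketch, assuming a hypothetical eight-disk group in which seven disks carry LUN data:

```python
# Hypothetical disk group: eight disks, of which seven carry LUN data and
# one has no LUN association (the hot spare). All names are invented.
disks = [f"DD010{i}" for i in range(1, 9)]
lun_to_disks = {
    "LUN_1000": ["DD0101", "DD0102", "DD0103", "DD0104",
                 "DD0105", "DD0106", "DD0107"],
}

def find_hot_spares(all_disks, lun_map):
    """A disk that appears in no LUN association is a candidate hot spare."""
    used = {d for members in lun_map.values() for d in members}
    return [d for d in all_disks if d not in used]
```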




              Figure 13-150 Hot spare LUN




We now show a high level view of all disks in ESSF20. There are 32 disks in the ESS, as
shown in Figure 13-146 on page 639 in the Number of Disks field. Figure 13-151 shows a
partial listing of the disks.




Figure 13-151 ESS all disks




We can also display a report of all the LUNs in the ESS. This report provides the physical disk
              association with each LUN. We have a total of 56 LUNs in the ESSF20 as shown in
              Figure 13-146 on page 639 (number of LUNS). A partial listing is shown in Figure 13-152.




              Figure 13-152 ESS all LUNs




Storage Subsystem Reporting
We now open Reporting → Storage subsystems. Storage Subsystems Reporting allows
viewing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level.

By Computer
We drill down Computer Views → By Computer. The report displays the association of
filesystems to the storage subsystem, LUNS, and disks on ESSF20. These reports are useful
for relating computers and filesystems to different storage subsystem components. There are
three options available in the Relate Computers to: pull down, as shown in Figure 13-153.




Figure 13-153 By Computer - Relate Computer to

We select Storage Subsystems from the pull down, select the desired computer, and click
Generate. The generated report (Figure 13-154) shows that TSMSRV43P uses 9.24 GB in the
ESS.




Figure 13-154 By Computer - storage subsystem




Returning to the selection screen tab (Figure 13-153 on page 645) we select LUNs. We
              choose the same host, and click Generate. Figure 13-155 shows the generated report; the
              relationship between TSMSRV43P and its assigned LUNs. TSMSRV43P has one LUN
              created on the ESS.




              Figure 13-155 By Computer - LUNs

              Finally, from the Selection tab (Figure 13-153 on page 645), we select Disks, our host
              TSMSRV43P, and click Generate. Figure 13-156 shows the report: the ESS disks assigned to
              the LUN on the host.




              Figure 13-156 By Computer - disk




By Filesystem/Logical Volume
We will now drill to Computer Views → By Filesystem/Logical Volume. The report
displays the association of filesystems to the storage subsystem, LUNS, and disks on
ESSF20. These reports are useful for relating computers and filesystems to different storage
subsystem components. There are three options available in the Relate Filesystem/Logical
Volumes to pull down, shown in Figure 13-157.




Figure 13-157 By filesystem/logical volume

Select Storage Subsystem, the host (TSMSRV43P), and click Generate. Figure 13-158
shows the filesystems on the host, which are located on the ESS.




Figure 13-158 By filesystem/logical volumes - storage subsystem




From the Selection tab (Figure 13-157 on page 647) we now choose LUNs, the host
              (TSMSRV43P), and click Generate. Figure 13-159 shows the LUN location of each
              filesystem on the host.




              Figure 13-159 By filesystem/logical volume - LUN

              From the Selection tab (Figure 13-157 on page 647) we now choose Disks, the host
(TSMSRV43P), and click Generate. Figure 13-160 shows which disks comprise each
               filesystem and logical volume.




              Figure 13-160 By filesystem/logical volume - Disk




By Storage Subsystem
We will now drill down Storage Subsystem Views → By Storage Subsystem. These
reports display the relationships of the ESS components (storage subsystems, LUNs, and
disks) to the computers and filesystems and logical volumes. There are two options available
in the Relate Storage Subsystems to: pull down, shown in Figure 13-161.




Figure 13-161 By Storage Subsystems

Select Computers from the pull down, the subsystem ESSF20, and click Generate.
Figure 13-162 shows the space used by each host on the storage subsystem.




Figure 13-162 By Storage subsystem - Computer




Now, select Filesystem/logical Volumes from Figure 13-161, the ESSF20 subsystem, and
              click Generate. Figure 13-163 shows each host’s filesystems and logical volumes, with their
              capacity and free space.




              Figure 13-163 By storage subsystem - filesystem/logical volume

              By LUN
              Continuing, we drill down Storage Subsystem Views → By LUNs (Figure 13-164).




              Figure 13-164 By LUNs




Select Computer from the Relate LUNs to: pull down, select the subsystem (ESSF20) with
the associated disks (default is all), and click Generate Report. Figure 13-165 shows the
LUNs assigned to each host, with the host’s logical name for the LUN (/dev/hdisk1 in this
case).




Figure 13-165 By LUN - computer

Now select Filesystem/Logical Volumes from the Relate LUNs to pull down, the ESSF20
subsystem with associated logical disks (default is all), and click Generate Report.
Figure 13-166 shows the relationships between the LUNs, computers, and
filesystems/logical volumes, including free space and host device logical names.




Figure 13-166 By LUNS - filesystem/logical volumes




Disks
              Now we drill to Storage Subsystem Views → Disks. There are two options available in the
              Relate Disks to: pull down, shown in Figure 13-167.




              Figure 13-167 Disks

              Select Computer from the pull down, the ESSF20 subsystem with related disks (default is
              all), and click Generate Report. Figure 13-168 shows the relationships of the disks to the
              hosts.




              Figure 13-168 Disks - computer


Now select Filesystem/Logical Volumes from the pull down (Figure 13-167 on page 652),
           the ESSF20 subsystem with related disks (default is all), and click Generate Report.
           Figure 13-169 shows the relationship between the ESS disks and the filesystems and logical
           volumes.




           Figure 13-169 Disks - filesystem/logical volumes


            Note: For demonstration purposes, we have reduced some of the fields in the reports.



13.13 IBM Tivoli Storage Resource Manager top 10 reports
           After analyzing typical customer scenarios, we have compiled the following list of “Top 10
           reports” which we recommend running regularly for best practices:
              ESS used and free storage
              ESS attached hosts report
              Computer Uptime
              Growth in storage used and number of files
              Incremental backup trends
              Database reports against DBMS size
              Database Instance storage report
              Database reports size by instance and by computer
              Locate the LUN on which a database is allocated
              Finding important files on your systems


13.13.1 ESS used and free storage
           This report shows the free and used storage on an ESS system. To generate this filesystem
           logical view report, navigate Data Manager → Reporting → Storage Subsystem →
           Computer Views → By Filesystem/Logical Volumes. Select the computers to report on,
           and select Disks from the pull-down Relate Filesystems/Logical Volumes To as in
           Figure 13-170.




Figure 13-170 ESS relation to computer selected by disk

              Click Generate Report. The report is shown in Figure 13-171. Various columns are
              displayed:
                  Storage Subsystem
                  Storage Subsystem Type
                  Manufacturer
                  Model
                  Serial Number
                  Computer
                  Filesystem/Logical Volume Path
                  Capacity
                  Free Space
                  Physical Allocation
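The Capacity and Free Space columns make it straightforward to derive a utilization figure per filesystem. A sketch with invented row values (the report itself does not display this derived column):

```python
# Invented report rows: Capacity and Free Space per filesystem, in GB.
rows = [
    {"filesystem": "/tsm1", "capacity_gb": 9.24, "free_gb": 1.24},
    {"filesystem": "/tsm2", "capacity_gb": 4.00, "free_gb": 3.00},
]

def percent_used(row):
    """Utilization derived from the Capacity and Free Space columns."""
    used = row["capacity_gb"] - row["free_gb"]
    return round(100.0 * used / row["capacity_gb"], 1)
```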




              Figure 13-171 Report for Filesystem/Logical Volumes Part 1




Figure 13-172 shows the right hand columns of the same report.




Figure 13-172 Report for Filesystem/Logical Volumes Part 2

This report provides quick answers to how much space on the ESS is allocated to each
filesystem.

Select LUNs this time from the pull-down in Figure 13-170 on page 654. The report in
Figure 13-173 shows the LUN to host mapping for the ESS, which filesystem is associated
with each LUN, and the free space.




Figure 13-173 Computer view to the filesystem with capacity and free space




13.13.2 ESS attached hosts report
              This report shows which systems are using storage on an ESS. This is useful when ESS
              maintenance is applied so that the administrators of affected systems can be informed.

              To generate this report, select Data Manager → Reporting → Storage Subsystem →
              Computer Views → By Computer tree. We have selected all computers as in
              Figure 13-174.




              Figure 13-174 ESS selection per computer

               Click Generate Report; the report is shown in Figure 13-175.




              Figure 13-175 ESS connections to computer report




Note that you can sort the report on a different column heading by clicking it. The current sort
          field is indicated by the small pointer next to the field name. Clicking again in the same
          column reverses the sort order.
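This click-to-sort behavior can be modeled as a sort whose direction flips on a repeated click. An illustrative sketch (row values are invented):

```python
# Invented report rows; clicking a column heading in the GUI sorts by that
# column, and clicking the same heading again reverses the order.
rows = [
    {"computer": "TSMSRV43P", "space_gb": 9.24},
    {"computer": "COLORADO", "space_gb": 12.50},
]

def sort_report(report_rows, column, descending=False):
    """Sort rows by a column; a second click toggles descending."""
    return sorted(report_rows, key=lambda r: r[column], reverse=descending)
```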


13.13.3 Computer Uptime Reporting
          Uptime is an important IT metric in the enterprise. To generate a Computer Uptime report,
          select Data Manager → Reporting → Availability → Computer Uptime → By
          Computer. Select the computers of interest by clicking the Selection... button and checking
          the boxes next to the desired computers in the Computer Selection window (Figure 13-176)
          and click OK.




          Figure 13-176 Computer Uptime Report - computer selection

          In the Selection window, specify a date range (optional), and click Generate Report, as
          shown in Figure 13-177.




          Figure 13-177 Computer Uptime report selection



For each computer, percent availability, number of reboots, total down time, and average
               downtime are given, as shown in Figure 13-178. The default sort order is by
               descending Total Down Time.
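The four columns of the uptime report can be derived from a reporting period and a list of outage durations. This is illustrative arithmetic only, not the product's actual computation:

```python
def uptime_metrics(period_hours, outage_hours):
    """Derive availability figures from a reporting period (hours) and a
    list of outage durations (hours). Purely illustrative arithmetic."""
    total_down = sum(outage_hours)
    reboots = len(outage_hours)
    percent_avail = 100.0 * (period_hours - total_down) / period_hours
    avg_down = total_down / reboots if reboots else 0.0
    return {
        "percent_availability": round(percent_avail, 2),
        "reboots": reboots,
        "total_down_time": total_down,
        "average_down_time": avg_down,
    }
```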




              Figure 13-178 Computer Uptime report part 1

              You can also display this information graphically, by selecting the pie chart icon at the top of
              the report, as shown in Figure 13-179.




              Figure 13-179 Computer Uptime report graphical combined (stacked bar)




Figure 13-180 shows an unstacked bar chart of the same information (right-click and select
           Bar Chart).




           Figure 13-180 Computer Uptime report graphical (bar chart)


13.13.4 Growth in storage used and number of files
           The Backup Reporting features of Data Manager also give a convenient way to track the total
           storage used by files in each computer, as well as the number of files stored. It can be
           presented graphically, to show historical numbers and future trends. This information helps
           you plan future storage requirements, be alerted to potential problems, and also (if using a
           traditional full and incremental backup product), plan your backup server storage
requirements, since this report shows the size of a full backup on each computer.

            Select Data Manager → Reporting → Backup → Backup Storage Requirements →
           Full Backup Size → By Computer. We used the Profile: TPCUser.Summary By Filesystem/
           Directory and selected all computers, as in Figure 13-181. Click Generate Report.




           Figure 13-181 Generate Full Backup Size report


Figure 13-182 shows the total disk space used by all the files, and the number of files on each
               computer. The top row shows the totals for all Agents.




              Figure 13-182 Select History chart for File count

              To drill down, select all the computers (using the Shift key) so they are highlighted, then click
              the pie icon, and select History Chart: Space Usage for Selected. The generated report
              (Figure 13-183), shows how the total full backup size has fluctuated, and is predicted to
              change in the future (dotted lines - to disable this, click Hide Trends).
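Trend lines of this kind suggest a simple least-squares line fit over the history points. As an illustration only (the fitting method the product actually uses is not documented here, and the sample history is invented):

```python
# Least-squares line fit, like the dotted forecast lines in a history chart.
def linear_trend(points):
    """Fit y = slope*x + intercept over (x, y) points by least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def forecast(points, x):
    """Extrapolate the fitted line to a future x."""
    slope, intercept = linear_trend(points)
    return slope * x + intercept

history = [(0, 10.0), (1, 12.0), (2, 14.0)]  # invented (day, GB used) samples
```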




              Figure 13-183 History Chart: Space Used




To display the file count graph, select History Chart: File count from the pie icon in
           Figure 13-182. The output report is shown in Figure 13-184, which shows trends in the
           number of files on each computer.




           Figure 13-184 History chart: File Count

            These reports will help you find potential problems (e.g. a computer system that shows an
            unexpected sudden upward or downward spike) and also predict disk and backup
            requirements for the future.


13.13.5 Incremental backup trends
           This report shows the rate of modification of files, which is very useful for incremental backup
           planning.

           Select Data Manager → Reporting → Backup → Backup Storage Requirements →
           Incremental Range Size → By Filesystem. Select Profile: TPCUser.By Modification as
           shown in Figure 13-185.




Figure 13-185 Incremental Range selection based on filespace

              The generated report shows all the filesystems on the selected computers as in
              Figure 13-186.




              Figure 13-186 Summary of all filespace

               The third column shows the total number and total size of files (for all the systems, then
               broken down by filesystem). Then there are “Last Modified” columns for one day, one week,
               one month, two months, three months, six months, nine months, and one year. Each of these
               gives the number and size of the modified files.
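The “Last Modified” columns amount to bucketing files by modification age. An illustrative sketch with an invented file inventory (ages in days):

```python
# Invented file inventory: (name, size in bytes, age in days since last
# modification). The report's buckets run from one day up to one year.
files = [
    ("dsm.opt", 4_096, 0.5),
    ("lochness.txt", 1_024, 3),
    ("archive.log", 65_536, 400),
]
BUCKET_DAYS = [1, 7, 30, 60, 90, 180, 270, 365]

def modified_within(inventory, max_age_days):
    """Count and total size of files modified within the last max_age_days."""
    hits = [(name, size) for name, size, age in inventory if age <= max_age_days]
    return len(hits), sum(size for _, size in hits)
```

Evaluating `modified_within` for each entry in `BUCKET_DAYS` reproduces one row of the report.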


To generate charts, highlight all the systems, and click the pie icon. Select Chart: Count
Distribution for Selected, as shown in Figure 13-187.




Figure 13-187 Selection for Filesystem and computer to generate a graphic

The chart is shown in Figure 13-188. Note that when your cursor passes over a bar, a pop-up
shows the number of files associated with that bar.




Figure 13-188 Bar chart for Incremental Range Size by Filesystem




You can display other filesystems using the Next 2 and Prev 2 buttons. Change the chart
              format by right-clicking and selecting a different layout. Figure 13-189 is a pie chart of the
              same data. The pop-ups work here also.




              Figure 13-189 Pie chart selected with number of files which have modified

              With these reports you can track and forecast your backups. You can also display backup
              behavior for the last one, three, nine, or 12 months.




13.13.6 Database reports against DBMS size
          This report shows an enterprise wide view of storage usage by all RDBMS. Select Data
          Manager - Databases → Reporting → Capacity → All DBMSs → Total Instance
          Storage → Network-wide and click Generate Report.
          Figure 13-190 shows a sample output.




          Figure 13-190 Total Instance storage used network wide

           This is a quick overview of database space consumption across the network. To drill down on a
          particular RDBMS type, select the appropriate magnifying glass icon as in Figure 13-191.




          Figure 13-191 DBMS drill down to the computer reports



The report (Figure 13-192) displays.




              Figure 13-192 DBMS drill down to the computer result

              Figure 13-192 shows the fields for an Oracle database. The fields for a DB2 database are as
              follows:
                  Computer name
                  Total Size
                  Container Capacity
                  Container Free Space
                  Log File Capacity
                  Tablespace Count
                  Container Count
                  Log File Count




13.13.7 Database instance storage report
           This report shows storage utilization by database instance. Go to Data Manager - Databases
            → Reporting → Capacity → UDB → Total Instance Storage → by Instance, select the
            computer(s) of interest, and click Generate Report. Figure 13-193 shows the result.




           Figure 13-193 DBMS report Total Instance Storage by Instance

            Note that you could select any RDBMS installed in your network.

           The report shows the following information for each Agent with DB2, plus a total (summary):
              Computer name
              RDBMS instance
              RDBMS type
              Total size
              Container Capacity
              Container free space
              Log file capacity
              Tablespace count
              Container count
              Log file count


13.13.8 Database reports size by instance and by computer
           The next report is based on the previous report (database Instance storage report), but in
           more detail. From the report in Figure 13-193, click the magnifying glass next to a computer of
           interest. Then do a further drill down on the generated report as in Figure 13-194.




Figure 13-194 Instance report RDBMS overview

              Select the computer again, and click the magnifying glass. The report shows the entire DB2
              environment running on computer Colorado. We have 10 DB2 UDB databases, shown in
              Figure 13-195 and Figure 13-196.




              Figure 13-195 Instance running on computer Colorado first part




Scroll to the right side of the panel (Figure 13-196).




           Figure 13-196 Instance running on computer Colorado second part

           Here we can see which databases are running in ARCHIVELOG mode.


13.13.9 Locate the LUN on which a database is allocated
           This report shows you which disk or LUN is used by a database. Go to Data Manager -
           Databases → Reporting → Capacity → UDB → Total Instance Storage → By
           Instance, select the Agent(s) of interest, then click Generate Report. Figure 13-197 shows
           the result.




           Figure 13-197 LUN report selection for a database




Select an Agent, and click the magnifying glass to drill down. Figure 13-198 displays.

              The report shows the following columns:
                  File Type
                  Path
                  File Size
                  Free Space
                  Auto Extend of a File




              Figure 13-198 Database select File and Path




Now select a particular data file, and click the magnifying glass. The generated pie chart is
shown in Figure 13-199. We can see this data file is allocated on the C: drive.




Figure 13-199 Pie chart report for the DB2 data file

Click the View Logical Volume button at the bottom to display the LUN report
(Figure 13-200).




Figure 13-200 LUN information




Using this procedure, we can find the LUNs where all the database data files are stored. This
              information is useful for a variety of purposes, e.g. for performance planning, availability
              planning, and assessing the impact of a LUN failure.


13.13.10 Finding important files on your systems
              This report generates a search for specific files over all computers managed by a Data
              Manager Server.

              As an example, we created a text file each on Lochness and Wisla called lochness.txt and
              wisla.txt, respectively. We have chosen this search for all machines because it will return a
              relatively small number of results; however, any search criteria could be used.

              The task requires a number of steps:
              1.   Define new Profile
              2.   Bind new Profile into a Scan
              3.   Generate a Report with your Profile
              4.   Define new Constraint
              5.   Generate a Report to find defined Constraint
              1. Define the new Profile
                   First create the Profile - Data Manager → Monitoring → Profiles, right-click, and select
                   Create Profile. Fill out the description field accordingly, and check the Summarize space
                   usage by, Accumulate history, and Gather information on the fields as desired. In the
                   bottom half click size distribution of files, as shown in Figure 13-201.




              Figure 13-201 Create Profile for own File search




Now select the File Filter tab. Click in the All files selected area and right-click to create
   a new condition, as shown in Figure 13-202.




Figure 13-202 Create new Condition

   Enter the desired file pattern into the Match field, and click Add to bring the condition to
   the display window below, as in Figure 13-203. You can select from different conditions:
   –   Matches any of
   –   Matches none of
   –   Matches
   –   Does not match
   When you have finished the condition, click OK. In our case we are matching Tivoli
   Storage Manager option files.
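The four condition types behave much like shell-style pattern matching. A rough Python illustration using `fnmatch` as a stand-in (the product's exact matching semantics may differ):

```python
import fnmatch

# Shell-style pattern matching as a stand-in for the Profile's condition
# types ("Matches any of" / "Matches none of").
def matches_any(name, patterns):
    """True if the filename matches at least one pattern."""
    return any(fnmatch.fnmatch(name, p) for p in patterns)

def matches_none(name, patterns):
    """True if the filename matches no pattern."""
    return not matches_any(name, patterns)
```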




Figure 13-203 Create Condition add


Figure 13-204 shows our newly created Condition.




              Figure 13-204 Saved Condition in new Profile

                   Now save the new Profile with an appropriate name (in this instance, Search for files).
                  The saved Profile now appears in the Profiles list, see Figure 13-205 on page 675.

                   Tip: We recommend choosing meaningful Profile names, which reflect the content or
                   function of the profile.




Figure 13-205 Listed Profiles containing Search for files

2. Bind new Profile into a Scan.
   First, create a new Scan - Data Manager → Monitoring → Scans. We chose
   TPCUser.Default Scan as shown in Figure 13-206 on page 675. Fill in a description for
   this Scan and select the Filesystems and Computers on which the Scan will run.




Figure 13-206 Add Profile to Scan


On the Profiles tab, select the newly created Profile and add it to the Profiles to apply to
                  Filesystems column, as shown in Figure 13-207.




              Figure 13-207 Add Profiles to apply to filesystems

                   Now select the time when the Scan should run, save the Scan, then check
                   the result.
              3. Generate Report with your Profile.
                  To view the results, select Data Manager → Reporting → Usage → Files → File Size
                   Distribution → By Filesystem. Select all filesystems you wish to report on, select the
                   Profile you created (Figure 13-208), and click Generate Report. The report contains
                   all the option files discovered by the Scan as in Figure 13-209.




Figure 13-208 Select Profile: tpcadmin.Search for files




Figure 13-209 Report with number of found Search for files

   Note that we found one file each on the LOCHNESS and WISLA C drives.




4. Define new Constraint.
                  We would like to know where specifically these files are located. To set up this search,
                  select Data Manager → Policy Management → Constraints → TPCUser.Orphaned
                  File Constraint, as shown in Figure 13-210. Enter a description, and select the
                  Filesystem Groups and Computers where you want to locate the files.




              Figure 13-210 Create Orphaned File search

                  Select the Options tab, then select Edit Filter as shown in Figure 13-211.




              Figure 13-211 Update the Orphaned selection



On the Edit Filter pop-up, double click the ATTRIBUTES Filter. Here we will replace the
   ORPHANED condition with our own filter, since we want to actually search for the text files
   we created, not orphaned files (Figure 13-212).




Figure 13-212 Update the selection with own data

   Use the Del button to delete the ORPHANED condition, then select NAME from the
   Attributes pull-down, and the Add button to add another Attributes condition. We will
   specify to search for the text files we created, as in Figure 13-213.




Figure 13-213 Enter the file search criteria




After each file pattern entry, click Add to save it. When all search arguments are entered,
                  click OK to save the search. The selection is now complete as in Figure 13-214.




              Figure 13-214 File Filter selection reconfirm

                   Click OK again. Save the search with a new description and name (File → Save As), so
                   that you do not overwrite the original TPCUser.Orphaned File Constraint. We saved the
                   search as “File search.”
                  Finally, we run the Scan and check the Scan job log for correct execution, as shown in
                  Figure 13-215.




              Figure 13-215 Scan log check




5. Generate Report to find defined Constraint
   Now look for the results of the file name search. Select Data Manager → Reporting →
   Usage Violations → Constraint Violations → By Computer, select all computers and
   generate the report. The report will present a summary as in Figure 13-216.




Figure 13-216 Summary report of all Tivoli Storage Manager option files

   To drill down, click the magnifying glass on WISLA as in Figure 13-217. This shows all the
   filesystems on WISLA where matching files were found.




Figure 13-217 File selection for computer WISLA


Click the magnifying glass on a filesystem (C drive, in this case). This will show all the files
                  found which matched the pattern, as in Figure 13-218. Note that there was 1 file reported,
                  which matches the summary view given in Figure 13-209 on page 677.




              Figure 13-218 Report for Tivoli Storage Manager Option file searched

                  You can also drill down to individual files, for detailed information as in Figure 13-219.




              Figure 13-219 File detail information




13.14 Creating customized reports
          Customized Reporting within Data Manager is done through the My Reports option, which is
          available for both Data Manager and Data Manager for Databases.

          There are three main options available within My Reports:
             System Reports
             Reports owned by username
             Batch Reports

           System Reports, while included here in the customized reporting section, are in fact not
           currently customizable. We still discuss them in this section as they are part of the My Reports
           group.

           Reports owned by username, where username is the currently logged-in Data Manager
           username, are modified versions of standard reports from the Reporting option. You
           will only see reports here that you have modified and saved.

          Batch Reports are reports that are typically set up to run on a schedule, although they can be
          run interactively. The key difference between Batch Reports and other reporting options is
          that with Batch Reports, the output will always be written to an output file rather than
          displayed on the screen.


13.14.1 System Reports
          These reports can, at this point in time at least, only be run as is. You cannot modify the
          parameters in any way, nor can you add additional reports to the list.

          These reports provide the same information that is available by running reports from the
          Reporting option. The intent of these reports is to provide frequently needed information
          quickly and repeatedly, without having to reenter parameters.




Data Manager
              Figure 13-220 shows the available System Reports for Data Manager.




              Figure 13-220 My Reports - System Reports

              Figure 13-221 shows the output from running the Storage Capacity system report. We could
              have generated exactly the same output by selecting Data Manager → Reporting →
              Capacity → Disk Capacity → By Computer → Generate Report. Obviously, selecting
              Data Manager → My Reports → Storage Capacity is a lot simpler.




              Figure 13-221 My Reports - Storage Capacity



Data Manager for Databases
The System Reports available for Data Manager for Databases are shown in Figure 13-222.
While there are quite a few reports available, they fall into three main categories:
   Database storage by database
   Database storage by user
   Database freespace

The only report that does not fall into one of those categories is a usage violation report.

Figure 13-222 shows the output from the All Dbms - User Database Space Usage report. We
are not so much interested in the report contents as such here, but rather in the fact that when
the report was run it produced a report for all users. You can go back to the selection tab and
select specific users if required. This capability exists for all of the System Reports.




Figure 13-222 Available System Reports for databases




13.14.2 Reports owned by a specific username
              In concept this option is very similar to System Reports. You can include here those reports
              that you need to run regularly, consistently and easily. The difference, compared to System
              Reports, is that you get to decide what reports are included and what they look like.

              However, it is important to remember that you will only see those reports that have been
              created by the currently logged in TotalStorage Productivity Center for Data username.

              Data Manager
              We will define a report here for tpcadmin, the username that we are currently logged in as.

              We will create a report that is exactly the same as the Storage Capacity system report as
              shown in Figure 13-221 on page 684. In practice this is not something you would normally do
              as a report already exists. However, this will demonstrate more clearly how the options relate
              to each other.

              We select Data Manager → Reporting → Capacity → Disk Capacity → By Computer
              and click Generate Report. Once the report is produced, we save the report definition, using
              the name My Storage Capacity. This is shown in Figure 13-223.




              Figure 13-223 Create My Storage Capacity report




Once the report is saved you will see it available under username’s Reports for tpcadmin as
shown in Figure 13-224.

There are a few features of saved reports worth mentioning here. First, characteristics such
as sort order are not saved with the report definition; however, selection criteria are saved.
Second, you can override the selection criteria when running your report. By default, only the
objects selected at the time of the save will be reported, but you can use the Selection tab
when running the saved report to include or exclude objects from the report. If you change
the selection criteria, you can resave the report to update the definition, or save it under
another name to create a new definition.




Figure 13-224 My Storage Report saved




Data Manager for Databases
              Database Reports created for specific users, in this case tpcadmin, are set up the same as in
              Data Manager.

              We will show one brief example here. We will take one of the reports that we created earlier in
              our discussion on Reporting (in this case Figure 13-100 on page 609), the Monitored Tables
              by RDBMS Type report, and set it up so that it can be run more easily.

              First we run the report by choosing Data Manager - Databases → Reporting → Usage →
              All DBMSs → Tables → Monitored Tables → By RDBMS Type and clicking Generate
              Report. We then saved the report definition, naming it Monitored Tables by RDBMS Type.
              This is shown in Figure 13-225.




              Figure 13-225 Monitored Tables by RDBMS Types customized report

              The report is more easily run now by choosing IBM Tivoli SRM for Databases → My
              Reports → username’s Reports → Monitored Tables by RDBMS Type.


13.14.3 Batch Reports
              In this section we will show how we set up some Batch Reports. All of the reports were set up
              in the same way so we will use only one as an example. The process is the same whether the
              report is for Data Manager or Data Manager for Databases.




Data Manager
To set up a new report, select Data Manager → My Reports → Batch Reports, right-click
Batch Reports, and select Create Batch Report. You will then see the screen shown in
Figure 13-226.




Figure 13-226 Create a Batch Report

Now it is simply a matter of specifying what is to be reported, when the report should run,
and what the output should be. In this case we are going to create a system uptime report. As
shown in Figure 13-227, we entered a report description of System Uptime, selected
Availability → Computer Uptime → By Computer, and clicked >>. Our selection is then
moved into the right hand panel, Current Selections.




Figure 13-227 Create a Batch Report - report selection




We then selected the Selection tab, which is shown in Figure 13-228. Here we are able to
              select a subset of available data by either reporting for a specified time range or a subset of
              available systems. We took the defaults here.




              Figure 13-228 Create a Batch Report - selection

              On the Options tab, we specified that the report should be executed and generated on the
              Agent called COLORADO, which is our Data Manager server. We selected HTML for Report
              Type Specification and then changed the rules for the naming of the output file under Output
              File Specification.

              By default the name will be {Report creator}.{Report name}.{Report run number}. In this
              case we do not really care who created the report, and a variable like report run number,
              which changes every time a new version of the report is created, makes it difficult to access
              the file from a static Web page. So we changed the report name to be {Report name}.html.

              The report will be created in <install-directory>\log\<Data-agent-name>\reports on the Agent
              system where the report job is executed. There is no ability to override the directory name.
              For example, C:\Program Files\tivoli\ep\subagents\TPC\Data\log\colorado\reports on our
              Windows 2000 Data Manager server COLORADO, or /usr/tivoli/tsrm/log/brazil/reports on an
              AIX Data Manager Agent called BRAZIL.
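
The naming and placement rules just described can be sketched as follows. The helper and its parameters are our own illustration, not a Data Manager interface; POSIX-style paths are used for the AIX example.

```python
import posixpath

# Illustrative sketch of the Batch Report output naming rules described in
# the text; this function is an assumption for clarity, not a product API.
def report_output_path(install_dir, agent, creator, name, run_number=None):
    # Reports land in <install-directory>/log/<agent>/reports on the Agent.
    directory = posixpath.join(install_dir, "log", agent, "reports")
    if run_number is None:
        # The static {Report name}.html override used in the text.
        filename = "%s.html" % name
    else:
        # The default pattern: {Report creator}.{Report name}.{Report run number}.
        filename = "%s.%s.%s" % (creator, name, run_number)
    return posixpath.join(directory, filename)

# A static name gives a stable URL for a Web page to link to:
print(report_output_path("/usr/tivoli/tsrm", "brazil", "tpcadmin",
                         "System Uptime"))
# -> /usr/tivoli/tsrm/log/brazil/reports/System Uptime.html
```

The point of the override is visible here: the static form yields the same path on every run, which a hand-built Web page can link to directly.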

              The Option tab is shown in Figure 13-229.

              Note here that it is possible to run a script after the report is created to perform some type of
              post-processing. For example, you might need to copy the output file to another system if
              your Web server is on a system that is not running a Data Manager Agent.
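
As a sketch of such a post-processing script, the fragment below copies the generated HTML file to a Web server's document root. All paths and names here are examples of our own, not Data Manager defaults.

```python
import os
import shutil
import tempfile

# Hypothetical post-processing step run after a Batch Report completes:
# copy the generated HTML report into a Web server directory.
def publish_report(report_file, web_root):
    os.makedirs(web_root, exist_ok=True)
    destination = os.path.join(web_root, os.path.basename(report_file))
    shutil.copy2(report_file, destination)  # copy2 preserves timestamps
    return destination

# Demonstrate with throwaway directories standing in for the Agent's
# reports directory and the Web server's document root.
reports = tempfile.mkdtemp()
webroot = tempfile.mkdtemp()
src = os.path.join(reports, "System Uptime.html")
with open(src, "w") as f:
    f.write("<html><body>uptime report</body></html>")
published = publish_report(src, webroot)
print(os.path.exists(published))
```

In practice a script like this would be invoked by the report job itself, or replaced by a network copy if the Web server is on another system.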




Figure 13-229 Create a Batch Report - options

On the When to REPORT tab we specified when the report should be generated. We chose
REPORT Repeatedly and then selected a time early in the morning (3:00 AM) and specified
that the report should be generated every day. This is shown in Figure 13-230.




Figure 13-230 Create a Batch Report - When to REPORT



We left the Alert tab options at their defaults, but it is possible to generate an Alert through
several mechanisms, including e-mail, an SNMP trap, or the Windows event log, should the
generation of the report fail.

              Finally, we saved the report, calling it System Uptime, as shown in Figure 13-231.




              Figure 13-231 Create a Batch Report - saving the report




Data Manager for Databases
We will use the same example here as we used in 13.14.2 on page 686, that is the
Monitored Tables by RDBMS Type report, but here we will save it in HTML format.

We choose Data Manager - Databases → My Reports → Batch Reports, right-click
Batch Reports and select Create Batch Report as shown in Figure 13-232.




Figure 13-232 Create a database Batch Report

Figure 13-233 shows the Report tab. We expanded in turn Usage → All DBMS’s → Tables
→ Monitored Tables → By RDBMS Type and clicked >>. We also entered a Description of
Monitored Tables by RDBMS Type.




Figure 13-233 Create a database Batch Report - Report tab

              We accepted the defaults on the Selection tab, which is to report on all RDBMS types, and
              then went to the Options tab, shown in Figure 13-234. We set the Agent computer that will
              run the report to COLORADO.

              Note that the system that you run the report on must be licensed for each type of database
              that you are reporting on. If we were to run the report on COLORADO, the Data Manager
              server system, we would need to have the Data Manager for Databases licences for Oracle
              and SQL Server loaded there, even though COLORADO does not run these databases.

              We also set the report type to HTML and changed the output file name to be {Report
              name}.html. This is shown in Figure 13-234.




Figure 13-234 Create a database Batch Report - Options tab

On the When to Report tab, shown in Figure 13-235, we chose REPORT Repeatedly and
set a start time.




Figure 13-235 Create a database Batch Report - When to Report tab

We did not change anything in the Alert tab. We saved the definition with the name Monitored
Tables by RDBMS Type as shown in Figure 13-236.




Figure 13-236 Create a database Batch Report - save definition

              We can now run the report by choosing Data Manager - Databases → My Reports →
              Batch Reports and then right-clicking tpcadmin.Monitored Tables by RDBMS Type and
              choosing Run Now.

              Figure 13-237 shows the output from the report execution.




              Figure 13-237 Monitored Tables by RDBMS Type batch report output




13.15 Setting up a schedule for daily reports
         Data Manager can produce reports according to a schedule. In our lab environment, we set
         up a number of Batch Reports as shown in Figure 13-238. Note that the name of each of the
         reports is prefixed by tpcadmin. This is the Windows username that we used to log into Data
         Manager. Even though the reports were created by a particular user, other Data Manager
         administrative users still have access to the reports (Data Manager non-administrative users
         can only look at the results).

          It is possible to generate output from Batch Reports in various formats, including HTML,
          CSV (comma-separated values), and formatted reports. For all of the reports that we set up,
          we specified HTML as the output type, and also set them to run on a daily schedule. That way
          it is easy to use a browser to quickly check the state of the organization’s storage. It also
          means that anyone can look at the reported data through their browser, without having
          access to, or indeed knowing how to use, Data Manager. Obviously, if unrestricted access to
          this data is not desirable, some form of password-based security could be added to the
          Web page.

         Currently, all of the HTML output from Batch Reports is in table format - graphs cannot be
         produced. There is also no ability to affect the layout of the reports in terms of sort order,
         nominating the columns to be displayed or the column size. Using the interactive reporting
         capability of the product does allow graphs to be produced and gives you some additional
         capability in determining what the output looks like. To go further than that you can export to a
         CSV file, and then use a tool such as Lotus 1-2-3® or Microsoft Excel to manipulate the
         output.
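
The same kind of manipulation the text suggests for a spreadsheet can also be scripted against an exported CSV file. The fragment below is a sketch only; the column names are invented for illustration and the real export layout may differ.

```python
import csv
import io

# Invented sample resembling a Disk Capacity By Computer export;
# the actual Data Manager CSV layout may differ.
sample = """Computer,Disk Space (GB)
colorado,69
gallium,59
lochness,75
wisla,75
"""

rows = list(csv.DictReader(io.StringIO(sample)))
total_gb = sum(float(r["Disk Space (GB)"]) for r in rows)
largest = max(rows, key=lambda r: float(r["Disk Space (GB)"]))
print("total: %.0f GB, largest: %s" % (total_gb, largest["Computer"]))
# -> total: 278 GB, largest: lochness
```

For real exports you would read the file from disk rather than an in-memory string, but the summarization step is the same.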




         Figure 13-238 Batch Reports listing


         The next section shows how to develop the Web site.




13.16 Setting up a reports Web site
              Since Data Manager can easily generate reports in HTML format, it is a logical extension to
              set up a Web site where the reports can be easily viewed.

              Since Data Manager itself is easy to install and use, we likewise took a fairly simple approach
              to creating the Web site. We used the Microsoft Word Web Page Wizard to create the basic
              layout of the page, as shown in Figure 13-239.

              The main page has two frames. In the left hand frame we have created links to each of the
              report files. The right hand frame is where the reports are displayed.

              As additional Batch Reports are added, it is a relatively simple process to edit the HTML
              source and include another link.

              Obviously, this could be made more sophisticated. An example would be to have the browser
              list all HTML files within the report directory.
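
As a sketch of that "list all HTML files" idea, the fragment below generates an index page with one link per report file. The directory layout and frame name are assumptions for illustration, not part of the product.

```python
import html
import os
import tempfile

# Generate a simple index page linking to every HTML report in a directory.
# The target frame name mirrors the two-frame layout described in the text.
def build_index(report_dir, target_frame="reportFrame"):
    links = []
    for name in sorted(os.listdir(report_dir)):
        if name.lower().endswith(".html") and name != "index.html":
            title = os.path.splitext(name)[0]
            links.append('<a href="%s" target="%s">%s</a><br>' %
                         (html.escape(name, quote=True), target_frame,
                          html.escape(title)))
    return "<html><body>\n" + "\n".join(links) + "\n</body></html>"

# Demonstrate against a throwaway directory with two report files.
with tempfile.TemporaryDirectory() as d:
    for name in ("System Uptime.html", "Storage Capacity.html"):
        open(os.path.join(d, name), "w").close()
    index = build_index(d)
print(index)
```

Running a script like this after the nightly batch run would keep the left-hand frame of the Web page current without hand-editing the HTML.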




              Figure 13-239 MS Word created Web page

              We then used the Virtual Directory Creation Wizard within Microsoft Internet Information
              Server (IIS) to set up access to the reports as shown in Figure 13-240. Detailed information
              on using IIS is shown in 8.2.2, “Using Internet Information Server” on page 299.




Figure 13-240 Setting up a Virtual Directory within IIS

We could then access the reports through a Web browser as shown in Figure 13-241.




Figure 13-241 Reports available from a Web browser




13.17 Charging for storage usage
              Through the Data Manager for Chargeback product, Data Manager provides the ability to
              produce Chargeback information for storage usage. The following items can have charges
              allocated against them:
                  Operating system storage by user
                  Operating system disk capacity by computer
                  Storage usage by database user
                  Total size by database-tablespace

              For each of the Chargeback by user options, a Profile needs to be specified. Profiles are
              covered in “Probes” on page 527.

              Data Manager can directly produce an invoice or create a file in CIMS format. CIMS is a set of
              resource accounting tools that allow you to track, manage, allocate, and charge for IT
              resources and costs. For more information on CIMS see: http://www.cims.com.

              Figure 13-242 shows the Parameter Definition screen. The costs allocated here do not
              represent any real environment, but are an example based on these assumptions:
                  Disk hardware costs, including controllers and switches, are $0.50 per MB
                  Hardware costs are only 20% of the total cost over the life of the storage = $2.50 per MB
                  On average only 50% of the capacity is used = $5.00 per MB used
                  The expected life of the storage is 4 years: $5.00 / 48 = $0.1042 per MB per month
                  The figures used are for monthly Chargeback
                  Chargeback is for cost recovery only, no profit
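
The assumptions above work out to the 0.1042 monthly rate as plain arithmetic (this is our own restatement, not a Data Manager calculation):

```python
# Reproduce the chargeback rate derivation from the stated assumptions.
hardware_cost_per_mb = 0.50                      # disk, controllers, switches
total_cost_per_mb = hardware_cost_per_mb / 0.20  # hardware is 20% of total -> $2.50/MB
cost_per_used_mb = total_cost_per_mb / 0.50      # only 50% of capacity used -> $5.00/MB used
monthly_rate = cost_per_used_mb / (4 * 12)       # 4-year life, 48 months
print(round(monthly_rate, 4))
# -> 0.1042
```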




              Figure 13-242 Chargeback parameter definition

In this example we have chosen to perform Chargeback by computer. It is possible to
separately charge for database usage and use a different rate from the computer rate. To do
this you would need to set up a Profile that excluded the database data; otherwise, it would
be counted twice.


Chargeback is useful even if you do not actually collect revenue from your users for the
resources consumed. It is a very powerful tool for raising awareness within the
organization of the cost of storage, and of the need to have the appropriate tools and
processes in place to manage storage effectively and efficiently.

Figure 13-243 shows the Chargeback Report being created. Currently, it is not possible to
have the Chargeback Report created automatically (that is, scheduled).




Figure 13-243 Create the Chargeback Report

Example 13-5 shows the Chargeback Report that was produced.

Example 13-5 Chargeback Report
Data Manager - Chargeback                                                     page 1
Computer Disk Space Invoice                                             Aug 23, 2005


tpcadmin.Linux Systems

  NAME                                                         SPACE             COST
                                                                  GB         0.104/GB

  klchl5h                                                           0            0.00

  group total                                                       0            0.00

Data Manager - Chargeback                                                     page 2
Computer Disk Space Invoice                                             Aug 23, 2005


tpcadmin.Windows DB Systems

  NAME                                                         SPACE             COST
                                                                  GB         0.104/GB

  colorado                                                         69            7.19
  senegal                                                           0            0.00

  group total                                                      69            7.19

Data Manager - Chargeback                                                     page 3
Computer Disk Space Invoice                                             Aug 23, 2005




tpcadmin.Windows Systems

                 NAME                                                     SPACE           COST
                                                                             GB       0.104/GB

                 gallium                                                    59           6.15
                 lochness                                                   75           7.82
                 wisla                                                      75           7.82

                 group total                                                209         21.79

              Data Manager - Chargeback                                                 page 4
              Computer Disk Space Invoice                                         Aug 23, 2005


              TPCUser.Default Computer Group

                 NAME                                                     SPACE           COST
                                                                             GB       0.104/GB

                 Cluster Group.DB2CLUSTER.ITSOSJNT                          137         14.28

                 group total                                                137         14.28

              Data Manager - Chargeback                                                 page 5
              Run Summary                                                         Aug 23, 2005


                 Computer Disk Space Invoice                            415 GB          43.26

                 run total                                                              43.26



              Example 13-6 shows the Chargeback Report in CIMS format.

              Example 13-6 Chargeback Report in CIMS format
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Linux Systems,klchl5h,1,0
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows DB
              Systems,colorado,1,71687000
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows DB Systems,senegal,1,0
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,gallium,1,61762720
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,lochness,1,78156288
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,wisla,1,78156288
              TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,TPCUser,Default Computer Group,Cluster
              Group.DB2CLUSTER.ITSOSJNT,1,142849536
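
Each CIMS record in Example 13-6 is a comma-separated line. The field meanings in the sketch below are inferred from the sample records, not taken from the official CIMS layout, and the space unit appears to be KB, since 61762720 KB matches the 59 GB invoiced for gallium.

```python
# Parse one CIMS-format record; field positions are inferred from the
# sample output above (an assumption, not a documented layout).
def parse_cims(line):
    fields = line.split(",")
    return {
        "record_type": fields[0],
        "start_date": fields[1],
        "end_date": fields[2],
        "owner": fields[7],
        "group": fields[8],
        "computer": fields[9],
        "space_kb": int(fields[-1]),  # unit inferred from the invoice figures
    }

record = parse_cims(
    "TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,"
    "tpcadmin,Windows Systems,gallium,1,61762720")
print(record["computer"], round(record["space_kb"] / 1024 ** 2, 1), "GB")
# -> gallium 58.9 GB
```

A script along these lines could feed the CIMS extract into a local accounting or reporting tool.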




14


   Chapter 14.   Using TotalStorage Productivity
                 Center for Fabric
                 In this chapter we provide an introduction to the features of TotalStorage Productivity Center
                 for Fabric. We discuss the following topics:
                     IBM Tivoli NetView navigation overview
                     Topology view
                     Data collection, reporting, and SmartSets




© Copyright IBM Corp. 2005. All rights reserved.                                                            703
14.1 NetView navigation overview
               Since TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager) uses IBM
               Tivoli NetView (abbreviated as NetView) for display, before going into further details, we give
               you a basic overview of the NetView interface, how to navigate in it and how TotalStorage
               Productivity Center for Fabric integrates with NetView. Detailed information on NetView is in
               the redbook Tivoli NetView V6.01 and Friends, SG24-6019.


14.1.1 NetView interface
               NetView uses a graphical interface to display a map of the IP network with all the components
               and interconnect elements that are discovered in the IP network. As your Storage Area
               Network (SAN) is also a network, TotalStorage Productivity Center for Fabric uses NetView
               and its graphical interface to display a map of the discovered storage network.


14.1.2 Maps and submaps
               NetView uses maps and submaps to navigate in your network and to display deeper details
               as you drill down. The main map is called the root map while each dependent map is called a
               submap. Your SAN topology will be displayed in the Storage Area Network submap and its
               dependents. You can navigate from one map to its submap simply by double-clicking the
               element you want to display.


14.1.3 NetView window structure
               Figure 14-1 shows a basic NetView window.




                                                      submap window




                  submap
                  stack




                                                  child submap area




               Figure 14-1 NetView window




The NetView window is divided into three parts:
               The submap window displays the elements included in the current view. Each element can
               be another submap or a device.
               The submap stack is located on the left side of the submap window. This area displays a
               stack of icons representing the parent submaps that you have already displayed. It shows
               the hierarchy of submaps you have opened for a particular map. This navigation bar can
               be used to go back to a higher level with one click.
               The child submap area is located at the bottom of the submap window. This area
               shows the submaps that you have previously opened from the current submap. You can
               open a submap from this area, or bring it into view if it is already open in another
               window.


14.1.4 NetView Explorer
           From the NetView map based window, you can switch to an Explorer view where all maps,
           submaps and objects are displayed in a tree scheme (similar to the Microsoft Windows
           Explorer interface). To switch to this view, right-click a submap icon and select Explore as
           shown in Figure 14-2.




           Figure 14-2 NetView Explorer option




                                          Chapter 14. Using TotalStorage Productivity Center for Fabric   705
Figure 14-3 shows the new display using the NetView Explorer.




               Figure 14-3 NetView explorer window

               From here, you can change the information displayed on the right pane by changing to the
               Tivoli Storage Area Network Manager view on the top pull-down field. The previously
               displayed view was System Configuration view. The new display is shown in Figure 14-4.




               Figure 14-4 NetView explorer window with Tivoli Storage Area Network Manager view




Now, the right pane shows Label, Name, Type and Status for the device. You may scroll right
           to see additional fields.


14.1.5 NetView Navigation Tree
            From any NetView window, you can switch to the Navigation Tree by clicking the tree icon
            circled in Figure 14-5.




           Figure 14-5 NetView toolbar

            NetView displays, in a tree format, all the objects contained in the maps you have
            already explored. Figure 14-6 shows the tree view.




           Figure 14-6 NetView tree map

           You can see that our SAN — circled in red — does not show its dependent objects since we
           have not yet opened this map through the standard NetView navigation window.

            You can click any object to open its submap in the standard NetView view.


14.1.6 Object selection and NetView properties
           To select an object, right-click it. NetView displays a context-sensitive menu with several
           options including Object Properties as shown in Figure 14-7.




Figure 14-7 NetView objects properties menu

               The Object Properties for that device will display (Figure 14-8). This will allow you to change
               NetView properties such as the label and icon type of the selected object.




               Figure 14-8 NetView objects properties


                Important: As TotalStorage Productivity Center for Fabric runs its own polling and
                discovery processes and only uses NetView to display the discovered objects, each
                change to the NetView object properties will be lost as soon as TotalStorage Productivity
                Center for Fabric regenerates a new map.




14.1.7 Object symbols
                 TotalStorage Productivity Center for Fabric uses its own set of icons as shown in Figure 14-9.
                 Two new icons have been added for Version 1.2 - ESS and SAN Volume Controller.




                 Figure 14-9 Productivity Center for Fabric icons


14.1.8 Object status
                 The color of a symbol or the connection represents its status. The colors used by Productivity
                 Center for Fabric and their corresponding status are shown in Table 14-1.

Table 14-1 Productivity Center for Fabric symbols color meaning
 Symbol color         Connection color      Status                    Status meaning

 Green                Black                 Normal                    The device was detected in at least one of the
                                                                      scans

 Green                Black                 New                       The device was detected in at least one of the
                                                                      scans and a new discovery has not yet been
                                                                      performed since the device was detected

 Yellow               Yellow                Marginal (suspect)        Device detected - the status is impaired but still
                                                                      functional

 Red                  Red                   Missing                   None of the scans that previously detected the
                                                                      device are now reporting it
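
The status-to-color mapping in Table 14-1 can be expressed as a simple lookup table (an illustration of the table's content only, not code from the product):

```python
# Symbol and connection colors per device status, per Table 14-1.
FABRIC_STATUS_COLORS = {
    "Normal":   {"symbol": "green",  "connection": "black"},
    "New":      {"symbol": "green",  "connection": "black"},
    "Marginal": {"symbol": "yellow", "connection": "yellow"},
    "Missing":  {"symbol": "red",    "connection": "red"},
}

print(FABRIC_STATUS_COLORS["Missing"]["symbol"])
# -> red
```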

                  IBM Tivoli NetView uses additional colors to show the specific status of the devices;
                  however, these are not used in the same way by Productivity Center for Fabric (Table 14-2).

                 Table 14-2 IBM Tivoli NetView additional colors
                  Symbol color             Status                     Status Meaning

                  Blue                     Unknown                    Status not determined

                  Wheat (tan)              Unmanaged                  The device is no longer monitored for topology
                                                                      and status changes.

                  Dark green               Acknowledged               The device was Missing, Suspect or Unknown.
                                                                      The problem has been recognized and is being
                                                                      resolved

                  Gray (used in NetView    Unknown                    Status not determined
                  Explorer left pane)
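
                 As a quick reference, the color-to-status mappings in Table 14-1 and Table 14-2 can be
                 combined into a single lookup. The following Python sketch is our own illustration and is not
                 part of the product; the names are ours:

```python
# Symbol color -> possible status values, combining Tables 14-1 and 14-2.
# Green maps to both Normal and New: the difference is whether a new
# discovery has been performed since the device was detected.
SYMBOL_STATUS = {
    "green": ("Normal", "New"),
    "yellow": ("Marginal",),
    "red": ("Missing",),
    "blue": ("Unknown",),
    "wheat": ("Unmanaged",),
    "dark green": ("Acknowledged",),
    "gray": ("Unknown",),
}

def possible_statuses(symbol_color):
    """Return the status values a symbol color can represent."""
    return SYMBOL_STATUS.get(symbol_color.lower(), ("Unknown",))
```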

                 If you suspect problems in your SAN, look in the topology displays for icons indicating a
                 status of other than normal/green. To assist in problem determination, Table 14-3 provides an
                 overview of symbol status with possible explanations of the problem.


                                                    Chapter 14. Using TotalStorage Productivity Center for Fabric      709
Table 14-3 Problem determination
                Display             Agents     Device     Link       Non-ISL                 ISL explanation
                                                                     explanation

                                    Any        Normal     Marginal   One or more, but        One or more, but
                                               (green)    (yellow)   not all links to the    not all links
                                                                     device in this          between the two
                                                                      topology are            switches are missing
                                                                     missing.

                                    Any        Normal     Critical   All links to the        All links between
                                               (green)    (red)      device in this          the two switches
                                                                     topology are            are missing, but
                                                                     missing, while other    the out-of-band
                                                                     links to this device    communication to
                                                                     in other topologies     the switch is
                                                                     are normal.             normal

                                    Any        Critical   Critical   All links to the        All links between
                                               (red)      (red)      device in this          the two switches
                                                                     topology are            are missing, and
                                                                     missing, while all      the out-of-band
                                                                     other links to          communication to
                                                                     devices in other        the switch is
                                                                     topologies are          missing or
                                                                     missing (if any)        indicates that the
                                                                                             switch is in critical
                                                                                             condition

                                    Both       Critical   Normal     All in-band agents      This condition
                                               (red)      (black)    monitoring the          should not happen.
                                                                     device can no           If you see this on
                                                                     longer detect the       an ISL where
                                                                     device. For             switches on either
                                                                     example, a server       side of the link
                                                                     reboot, power-off,      have an
                                                                     shutdown of agent       out-of-band agent
                                                                     service, Ethernet       connected to your
                                                                     problems, and           SAN Manager,
                                                                      so on.                  then you are
                                                                                             having problems
                                                                                             with your
                                                                                             out-of-band agent.

                                    Both       Critical   Marginal   At least one link to    This condition
                                               (red)      (yellow)   the device in this      should not happen.
                                                                     topology is normal      If you see this on
                                                                     and one or more         an ISL where
                                                                     links are missing. In   switches on either
                                                                     addition, all in-band   side of the link
                                                                     agents monitoring       have an
                                                                     the device can no       out-of-band agent
                                                                     longer detect the       connected to your
                                                                     device                  SAN Manager,
                                                                                             then you are
                                                                                             having problems
                                                                                             with your
                                                                                             out-of-band agent.




710   IBM TotalStorage Productivity Center: Getting Started
14.1.9 Status propagation
           Each object has a color representing its status. If the object is an individual device, the status
           shown is that of the device. If the object is a submap, the status shown reflects the summary
           status of all objects in its child submap. Status of lower level objects is propagated to the
           higher submap as shown in Table 14-4.

           Table 14-4 Status propagation rules
            Object status             Symbols in the child submap

            Unknown                   No symbols with status of normal, critical, suspect or unmanaged

            Normal                    All symbols are normal or acknowledged

            Suspect (marginal)        All symbols are suspect or
                                      Normal and suspect symbols or
                                      Normal, suspect and critical symbols

             Critical                  At least one symbol is critical and no symbols are normal
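
            The propagation rules in Table 14-4 can be sketched as a small function. This is our own
            illustration of the rules, not product code; status names are lowercased for simplicity:

```python
def propagate_status(child_statuses):
    """Summarize child-submap symbol statuses per Table 14-4.

    `child_statuses` is an iterable of strings such as "normal",
    "acknowledged", "suspect", "critical", or "unmanaged".
    """
    s = set(child_statuses)
    if not s:
        return "unknown"
    # Normal: all symbols are normal or acknowledged
    if s <= {"normal", "acknowledged"}:
        return "normal"
    # Unknown: no symbols with status normal, critical, suspect, or unmanaged
    if not s & {"normal", "critical", "suspect", "unmanaged"}:
        return "unknown"
    # Critical: at least one critical symbol and no normal symbols
    if "critical" in s and "normal" not in s:
        return "critical"
    # Suspect: all suspect, normal and suspect, or normal, suspect and critical
    if "suspect" in s:
        return "suspect"
    return "unknown"  # combination not covered by the table
```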


14.1.10 NetView and Productivity Center for Fabric integration
           Productivity Center for Fabric adds a SAN menu entry in the IBM Tivoli NetView interface,
           shown in Figure 14-10. The SAN pull-down menu contains the following entries:
              SAN Properties to display and change object properties, such as object label and icon
              Launch Application to run a management application
              ED/FI Properties to view ED/FI events
              ED/FI Configuration to start, stop, and configure ED/FI
              Configure Agents to add and remove agents
              Configure Manager to configure the polling and discovery scheduling
              Set Event Destination to configure SNMP and TEC events recipients
              Storage Resource Manager to launch TotalStorage Productivity Center for Data
              Help




           Figure 14-10 SAN Properties menu


Each of these menu items is described in more detail later in this chapter.



14.2 Walk-through of Productivity Center for Fabric
               This section takes you through the Productivity Center for Fabric. It steps through different
               views to help you understand how to use different panels.

                If you are familiar with NetView, this will look similar. Productivity Center for Fabric uses
                NetView to display your SAN along with your IP network. In the first view, you see three icons:
                IP Internet, SmartSets, and SAN. We focus on the SAN icon.

               Figure 14-11 shows the root display window when you first launch NetView. The green
               background on the SAN icon indicates that all is well in that environment.




               Figure 14-11 NetView root display




There are three different types of views in Productivity Center for Fabric: Device Centric view,
           Host Centric view, and SAN view. In our configuration, the NetView display (Figure 14-12)
           shows two separate SANs that we are monitoring: TPC SAN and TSM SAN.




           Figure 14-12 SAN view


14.2.1 Device Centric view
           The first view is the Device Centric view. From this view, you can drill down to see the device
           point of view. In this example, we have a view of two IBM FAStT devices. The one we are
           using is labeled FAStT-1T14859668. We drill down on that device to see which systems are
           using LUNs from the FAStT.

           Here we see that two LUNs are available. As we drill down on LUN1, we see that a host
            named PQDISRV has been assigned that LUN. If we go further, we can see that this
           system is a Windows 2000 system.




14.2.2 Host Centric view
               Now we investigate the Host Centric view.

                Note: Only the systems that have the Productivity Center for Fabric agent installed on
                them are displayed in this view.

               The Host Centric view displays all host systems and their logical relationships to local and
               SAN-attached devices. Here again we see a system called PQDISRV. If we drill down on this
               system, we can see that this is a Windows 2000 system that has four file systems defined on
               it. We can also look at the properties of those file systems. This enables us to see such
               information as the type of file system, mount point, total amount of space and how much free
               space is available. As we drill down further, we can see the logical volume or volumes behind
               those file systems.


14.2.3 SAN view
               The SAN view displays one symbol for each SAN. You can see from Figure 14-12 on
               page 713 that there are two SANs. When we double-click the SAN icon labelled TPC SAN,
               we see the underlying submap (Figure 14-13). From the submap, you can choose either the
               Topology View or Zone View.




               Figure 14-13 SAN subview




First we explore the Zone View. The Zone View displays information by zone groupings.
Figure 14-14 displays the information about the three zones that have been set up on the
Fibre Channel switch: the Colorado, Gallium and PQDI zones.




Figure 14-14 Zone View




We can drill down in each zone and see which system and devices have been assigned to
               that specific zone. Figure 14-15 shows the Colorado zone in which there is one host and a
               FAStT disk subsystem.




               Figure 14-15 Colorado zone contents




Now we look at the Topology View (Figure 14-16). The Topology View draws a picture of how
the SAN is configured, which devices are connected to which ports, and so on. As we drill
down in the Topology View, we first see the interconnect elements. This shows you the
connection between any switches. In our small environment, we have only one switch, so the
only device connected is the itsosw4 switch, which is an IBM 2109-F16 switch.




Figure 14-16 Topology View of switches




If we had two switches in our SAN, we would see a switch icon on either side of the
               Interconnect elements icon. As we drill down on the switch, we see what devices and
               systems are directly attached to it. Figure 14-17 shows five hosts, the FAStT device, and the
               IBM switch in the middle.




               Figure 14-17 SAN topology

               From here, we show you several features of Productivity Center for Fabric such as:
                  How to configure the manager and what happens when things go wrong
                  Properties of a host with the Productivity Center for Fabric agent installed
                  How to configure SNMP agents




We begin by showing what happens when things go wrong. Figure 14-18 shows that the
FAStT disk system has a redundant connection. Let’s see what happens when one
connection goes down.




Figure 14-18 FAStT dual connections




In Figure 14-19, notice on the left that all of the parent icons have turned yellow. This
               indicates that something has happened in your SAN environment. You can then drill down,
               following the yellow trail until you find the problem. Here we can see that one of the
               connections to the FAStT disk system has gone down.




               Figure 14-19 Failed resource

                This gives an administrator a place to start looking. After determining the problem, the
                administrator can take corrective action. The FAStT icon has turned Red, not because it has failed, but
               so you can see that it is affected. In our case, we lost access to one of the controllers of the
               FAStT, because it was the only path to that controller. If you right-click the FAStT icon and
               then select Acknowledge, it changes back to Green if the device itself is OK. The path to the
               icon still remains Yellow. When the problem is corrected, the topology is updated to reflect the
               resolution.




Now let us see the kind of information that we can view from a host that has a Productivity
Center for Fabric agent installed on it. You select the required host, and then click
SAN → SAN Properties.

A Properties window (Figure 14-20) opens. It shows such information as IP address,
operating system, host bus adapter type, driver versions, and firmware levels.




Figure 14-20 Properties of host GALLIUM




When you click the Connection tab on the left, you see the port on the switch to which the
               specific host is connected as shown in Figure 14-21.




               Figure 14-21 GALLIUM connection

               Now that you have seen the agents and where you can define them, let’s look at the manager
                configuration. The manager configuration is simple and enables you to set the polling intervals.
               Figure 9-46 on page 349 shows the polling setup, in which you specify how often you want
               your agents to poll the SAN. You can set this to minutes, hours, days, weeks, which days you
               want to poll on, and the exact time.

                Alternatively, you can poll manually by clicking the Poll Now button. The Clear History button
                resets the state of an object that previously had a problem but has since recovered: such an
                object remains yellow until you click Clear History, which returns it to normal (green).
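
                To illustrate the scheduling semantics just described (which days to poll on, and the exact
                time), the next poll time for a daily schedule restricted to certain weekdays could be
                computed as in the following sketch. This is our own illustration, not product code:

```python
from datetime import datetime, timedelta

def next_poll(now, hour, minute, allowed_weekdays):
    """Return the next poll time at hour:minute on an allowed weekday
    (0 = Monday), strictly after `now`."""
    candidate = now.replace(hour=hour, minute=minute,
                            second=0, microsecond=0)
    for _ in range(8):  # looking at most one week ahead is enough
        if candidate > now and candidate.weekday() in allowed_weekdays:
            return candidate
        candidate += timedelta(days=1)
    raise ValueError("no allowed weekday configured")
```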




14.2.4 Launching element managers
          Productivity Center for Fabric also has the ability to launch element managers. By element
          manager, we are referring to applications that vendors use to configure their hardware.
          Figure 14-22 shows the Productivity Center for Fabric launching the element manager for the
          IBM 2109 Fibre Channel switch.




          Figure 14-22 Launching an element manager




Figure 14-23 shows the management tool for the IBM 2109 after being launched from
               Productivity Center for Fabric.




               Figure 14-23 Switch management




14.2.5 Explorer view
           Along with the Productivity Center for Fabric Topology View, you can view your SAN
           environment with a Windows Explorer-style view. By clicking the Submap Explorer button in
           the center of the toolbar, you see a view like the example in Figure 14-24. The Navigation
           Tree button shows a flowchart-type view of the Productivity Center for Fabric views.




           Figure 14-24 Explorer view



14.3 Topology views
           The standard IP-based IBM Tivoli NetView root map contains IP Internet and SmartSets
           submaps. Productivity Center for Fabric adds a third submap, called Storage Area Network,
           to allow the navigation through your discovered SAN. Figure 14-25 shows the NetView root
           map with the addition of Productivity Center for Fabric.




Figure 14-25 IBM Tivoli NetView root map

               The Storage Area Network submap (shown in Figure 14-26) displays an icon for each
               available topology view. There will be a SAN view icon for each discovered SAN fabric (three
               in our case), a Device Centric View icon, and a Host Centric View icon.




               Figure 14-26 Storage Area Network submap




You can see in this figure that we have three fabrics. They are named Fabric1, Fabric3, and
           Fabric4, because we changed their labels using SAN → SAN Properties, as explained in
           “Properties” on page 736.

          Figure 14-27 shows the complete list of views available. In the following sections we will
          describe the content of each view.



           The hierarchy of topology views is as follows:

              Tivoli NetView root map
                 Storage Area Network
                    SAN view
                       Topology view
                          Switches > Elements
                          Interconnect elements > Elements (switches)
                       Zone view
                          Zones > Elements
                    Device Centric view
                       Devices (storage servers) > LUNs > Host > Platform
                    Host Centric view
                       Hosts > Platform > Filesystems > Volumes

           Figure 14-27 Topology views


14.3.1 SAN view
          The SAN view allows you to see the SAN topology at the fabric level. In this case we clicked
          the Fabric1 icon shown in Figure 14-26 on page 726. The display in Figure 14-28 appears,
          giving access to two further submaps:
             Topology view
             Zone view




          Figure 14-28 Storage Area Network view


Topology view
               The topology view is used to display all elements of the fabric including switches, hosts,
                devices, and interconnects. As shown in Figure 14-29, this particular fabric has two switches.




               Figure 14-29 Topology view

               Now, you can click a switch icon to display all the hosts and devices connected to the
               selected switch (Figure 14-30).




               Figure 14-30 Switch submap


On the Topology View (shown in Figure 14-29 on page 728) you can also click Interconnect
Elements to display information about all the switches in that SAN (Figure 14-31).




Figure 14-31 Interconnect submap

The switch submap (Figure 14-30) shows that six devices are connected to switch
ITSOSW1. Each connection line represents a logical connection. Click a connection bar twice
to display the exact number of physical connections (Figure 14-32).
We now see that, for this example, SOL-E is connected to two ports on the switch ITSOSW1.




Figure 14-32 Physical connections view




                When the connection represents only one physical connection (or if we click one of the two
               connections shown in Figure 14-32), NetView displays its properties panel (Figure 14-33).




               Figure 14-33 NetView properties panel


               Zone view
               The Zone view submap displays all zones defined in the SAN fabric. Our configuration
               contains two zones called FASTT and TSM (Figure 14-34).




               Figure 14-34 Zone view submap


Click twice on the FASTT icon to see all the elements included in the FASTT zone
           (Figure 14-35).




           Figure 14-35 FASTT zone

            In Lab 1, the FASTT zone contains five hosts and one storage server. We installed
            TotalStorage Productivity Center for Fabric agents on the four hosts that are labelled with
            their correct hostnames (BRAZIL, GALLIUM, SICILY, and SOL-E). For the fifth host, LEAD, we
            did not install the agent. However, it is discovered because it is connected to the switch.
            Productivity Center for Fabric displays it as a host device, and not as an unknown device,
            because the QLogic HBA drivers installed on LEAD support RNID. RNID support enables the
            switch to obtain additional information, including the device type (reflected in the icon
            displayed) and the WWN. The disk subsystem is shown with a question mark because the
            FAStT700 was not yet fully supported (with the level of code available at the time of
            writing), and Productivity Center for Fabric was not able to determine all the properties from
            the information returned by the in-band and out-of-band agents.


14.3.2 Device Centric View
           You may have several SAN fabrics with multiple storage servers. The Device Centric View
           (accessed from the Storage Area Network view, as shown in Figure 14-26 on page 726),
           displays the storage devices connected to your SANs and their relationship to the hosts. This
           is a logical view as the connection elements are not shown. Because of this, you may prefer
           to see this information using the NetView Explorer interface as shown in Figure 14-36. This
           has the advantage of automatically displaying all the lower level items for Device Centric View
            shown in Figure 14-27 on page 727 simultaneously, such as LUNs and Host.




Figure 14-36 Device Centric View

               In the preceding figure, we can see the twelve defined LUNs and the host to which they have
                been allocated. The dependency tree is not retrieved from the FAStT server; it is
                consolidated from the information retrieved from the managed hosts. Therefore, the
                filesystems are not displayed, because they can be spread across several LUNs and this
                mapping is transparent at the SAN level. Note that the same information is also available for
                the MSS storage server, the other disk storage device in our SAN.


14.3.3 Host Centric View
               The Host Centric View (accessed from the Storage Area Network view, as shown in
               Figure 14-26 on page 726) displays all the hosts in the SAN and their related local and
               SAN-attached storage devices. This is a logical view that does not show the interconnect
               elements (and runs across the fabrics). Since this is also a logical view, like the Device
               Centric View, the NetView Explorer presents a more comprehensive display (Figure 14-37).




Figure 14-37 Host Centric View for Lab 1

            We see our four hosts and all their filesystems, whether locally attached or SAN-attached.
            NFS-mounted filesystems and shared directories are not displayed. Since no agent is running
            on LEAD, it is not shown in this view.


14.3.4 iSCSI discovery
           For this environment we will reference SAN Lab 2 (“Lab 2 environment” on page 752).

           Starting discovery
            You can discover and manage devices that use the iSCSI storage networking protocol
            through Productivity Center for Fabric using IBM Tivoli NetView. Before discovery, SNMP and
            the iSCSI MIBs must be enabled on the iSCSI device, and Tivoli NetView IP discovery must
            be enabled. See 14.11, “Real-time reporting” on page 786 for enabling IP discovery.

           The IBM Tivoli NetView nvsniffer daemon will discover the iSCSI devices. Depending on
           the iSCSI operation chosen, a corresponding iSCSI SmartSet will be created under the IBM
           Tivoli NetView SmartSets icon. By default, the nvsniffer utility runs every 60 minutes. Once
            nvsniffer discovers an iSCSI device, it creates an iSCSI SmartSet located on the NetView
           Topology map at the root level.

            The user can select what type of iSCSI device is discovered. From the menu bar, click
            Tools → iSCSI Operations and select Discover All iSCSI Devices, Discover All iSCSI
            Initiators, or Discover All iSCSI Targets, as shown in Figure 14-38.

           For more details about iSCSI, refer to 14.12, “Productivity Center for Fabric and iSCSI” on
           page 810.




Figure 14-38 iSCSI discovery

               Double-click the iSCSI SmartSet icon to display all iSCSI devices.

               Once all iSCSI devices are discovered by NetView, the iSCSI SmartSet can be managed from
               a high level. Status for iSCSI devices is propagated to the higher level, as described in 14.1.9,
               “Status propagation” on page 711. If you detect a problem, drill to the SmartSet icon and
               continue drilling through the iSCSI icon to determine what iSCSI device is having the
               problem. Figure 14-39 shows an iSCSI SmartSet.




               Figure 14-39 iSCSI SmartSet


14.3.5 MDS 9000 discovery
                The Cisco MDS 9000 is a family of intelligent multilayer directors and fabric switches with
                features such as virtual SANs (VSANs), advanced security, sophisticated debug analysis
                tools, and an element manager for SAN management.

               Productivity Center for Fabric has enhanced compatibility for the Cisco MDS 9000 Series
               switch. Tivoli NetView displays the port numbers in a format of SSPP, where SS is the slot
               number and PP is the port number. The Launch Application menu item is available for the
                Cisco switch. When Launch Application is selected, the Cisco Fabric Manager application
               is started. For more details, see 14.7.1, “Cisco MDS 9000 discovery” on page 745.
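
                The SSPP port-number convention can be illustrated with a small helper. This sketch is our
                own, not part of the product, and it assumes the displayed value is zero-padded to four
                digits:

```python
def split_sspp(display_port):
    """Split an MDS 9000 port number in SSPP form into (slot, port).

    SS is the two-digit slot number and PP the two-digit port number,
    so "0112" means slot 1, port 12.
    """
    digits = display_port.zfill(4)  # tolerate a dropped leading zero
    return int(digits[:2]), int(digits[2:])
```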


14.4 SAN menu options
          In this section we describe some of the menu options contained under the SAN pull-down
          menu option for Productivity Center for Fabric.


14.4.1 SAN Properties
          As shown in Figure 14-40, select an object and use SAN → SAN Properties to display the
          properties gathered by Productivity Center for Fabric. In this case we are selecting a
          particular filesystem (the root filesystem) from the Agent SOL-E.




          Figure 14-40 SAN Properties menu

          This will display a SAN Properties window that is divided into two panes. The left pane always
          contains Properties, and may also contain Connection and Sensors/Events, depending on
          the type of object being displayed. The right pane contains the details of the object.

          These are some of the device types that give information in the SAN Properties menu:
             Disk drive
             Hdisk
             Host file system
             LUN
             Log volume
             OS
             Physical volume
             Port
             SAN


Switch
                  System
                  Tape drive
                  Volume group
                  Zone

               Properties
               The first grouping item is named Properties and contains generic information about the
               selected device. The information that is displayed depends on the object type. This section
               shows at least the following information:
                   Label: The label of the object as it is displayed by Productivity Center for Fabric. If you
                   update this field, the change is preserved across all discoveries.
                  Icon: The symbol representing the device type. If the object is of an unknown type, this
                  field will be in read-write mode and you will be able to select the correct symbol.
                  Name: The reported name of the device.

               Figure 14-41 shows the Properties section for a filesystem. You can see that it displays the
               filesystem name and type, the mount point, and both the total and available space. Since a
               filesystem is not related to a port connection and also does not return sensor events, only the
               Properties section is available.




               Figure 14-41 Productivity Center for Fabric Properties — Filesystem

                Figure 14-42 shows the Properties section for a host. It displays the host name, the IP
                address, the hardware type, and information about the HBA. Because a host does not
                return sensor-related events, only the Properties and Connection sections are
                available.




736   IBM TotalStorage Productivity Center: Getting Started
Figure 14-42 Productivity Center for Fabric Properties — Host

Figure 14-43 shows the Properties section for a switch. It displays fields
including the name, the IP address, and the WWN. Because a switch is a connection device
and returns event and sensor information, all three item groups are available
(Properties, Connection, and Sensors/Events).




Figure 14-43 Productivity Center for Fabric Properties — Switch




                                Chapter 14. Using TotalStorage Productivity Center for Fabric   737
Figure 14-44 shows the properties for an unknown device. Here you can change the icon to a
                predefined one by using the Icon pull-down field. You can also change the label of a
                device even if the device is of a known type.




               Figure 14-44 Changing icon and name of a device


                Connection
                The second grouping item, Connection, shows all ports in use for the device. This section
                appears only when it is appropriate to the displayed device type, such as a switch or host.

                In Figure 14-45, we see the Connection tab for a switch with six ports in use. Port 0 is
                used for the Inter-Switch Link (ISL) to switch ITSOSW2. This is a very useful display
                because it shows which device is connected to each switch port.




               Figure 14-45 Connection information




Sensors/Events
          The third grouping item, Sensors/Events, is shown in Figure 14-46. It shows the sensor
          status and the device events for a switch. It can include information about fans, batteries,
          power supplies, transmitters, enclosures, boards, and others.




         Figure 14-46 Sensors/Events information



14.5 Application launch
         Many SAN devices have vendor-provided management applications. Productivity Center for
         Fabric provides a launch facility for many of these.




14.5.1 Native support
               For some supported devices, Productivity Center for Fabric will automatically discover and
               launch the device-related administration tool. To launch, select the device and then click SAN
                → Launch Application.

               This will launch the Web application associated with the device. In our case, it launches the
               Brocade switch management Web interface for the switch ITSOSW1, shown in Figure 14-47.




               Figure 14-47 Brocade switch management application


14.5.2 NetView support for Web interfaces
                For devices whose management application has not been identified, IBM Tivoli NetView lets
                you manually configure the launch of a Web interface for any application, by doing the
                following actions:
                  Right-click the device and select Object Properties from the context-sensitive menu.
                  On the dialog box, select the Other tab (shown in Figure 14-48).
                  Select LANMAN from the pull-down menu.
                  Check isHTTPManaged.
                  Enter the URL of the management application in the Management URL field.
                  Click Verify, Apply, OK.




Figure 14-48 NetView objects properties — Other tab

After this, you can launch the Web application by right-clicking the object and then selecting
Management Page, as shown in Figure 14-49.




Figure 14-49 Launch of the management page


 Important: This definition will be lost if your device is removed from the SAN and
 subsequently rediscovered, since it will be a new object for NetView.




14.5.3 Launching TotalStorage Productivity Center for Data
                The TotalStorage Productivity Center for Data interface can be started from the TotalStorage
                Productivity Center for Fabric NetView console. To do this, select SAN → Storage Resource
                Manager, as shown in Figure 14-50.




               Figure 14-50 Launch Tivoli Storage Resource Manager

               The user properties file contains an SRMURL setting that defaults to the fully qualified host
               name of Tivoli Storage Area Network Manager. This default assumes that both TotalStorage
               Productivity Center for Disk and TotalStorage Productivity Center for Fabric are installed on
               the same machine. If TotalStorage Productivity Center for Data is installed on a separate
               machine, you can modify the SRMURL value to specify the host name of the TotalStorage
               Productivity Center for Data machine. For instructions on how to do this, please refer to the
               manual IBM Tivoli Storage Area Network Manager User’s Guide, SC23-4698.
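
                As an illustration, the change amounts to editing one line in the user properties
                file. The SRMURL key comes from the discussion above; the file name, location, host
                name, and port shown here are hypothetical, so take the actual values from the
                User's Guide:

```properties
# user.properties -- location varies by installation (hypothetical example)
# Point the Fabric NetView console at the TotalStorage Productivity Center
# for Data machine:
SRMURL=http://tpcdata.example.com:9550
```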

               If the following conditions are true, you can start the TotalStorage Productivity Center for Data
               graphical interface from the Tivoli NetView console:
                  TotalStorage Productivity Center for Data or the TotalStorage Productivity Center for Data
                  graphical interface is installed on the same machine as TotalStorage Productivity Center
                  for Fabric, or the SRMURL value specifies the hostname of TotalStorage Productivity
                  Center for Data.
                  The TotalStorage Productivity Center for Fabric is currently running.

               For more information on TotalStorage Productivity Center for Data, see the redbook IBM Tivoli
               Storage Resource Manager: A Practical Introduction, SG24-6886.


14.5.4 Other menu options
               For the other options on the SAN pull-down menu:
                   Configure Agents is covered in “Configuring the outband agents” on page 346 and
                  “Checking inband agents” on page 348.
                  Configure Manager is covered in “Performing an initial poll and setting up the poll interval”
                  on page 349.
                   Set Event Destination is covered in “Configuring SNMP” on page 342.
                  ED/FI Properties and ED/FI Configuration are covered in “Configuration for ED/FI - SAN
                  Error Predictor” on page 818.


14.6 Status cycles
         Figure 14-51 shows the typical color change status cycles which reflect normal operation as a
         device goes down and comes up. Table 14-1 on page 709 and Table 14-2 on page 709 list the
         meanings of the different colors.



          (Diagram: a new object appears as NEW/GREEN; device down -> MISSING/RED;
          device up -> NORMAL/GREEN; device down again -> MISSING/RED; Clear History
          returns a NORMAL/GREEN object to NEW/GREEN.)
         Figure 14-51 IBM Tivoli SAN Manager — normal status cycle

         If you do not manually use NetView capabilities to change status, the status of a Tivoli SAN
         Manager object goes from green to red and from red to green.

         Note that the only difference between an object in the NORMAL/GREEN and NEW/GREEN
         status is in the Status field under SAN Properties (see Figure 14-42 on page 737 for an
         example). A new object will have New in the field and a normal object will show Normal. The
         icon displayed in the topology map will look identical in both cases.
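
          Read as a state machine, the normal cycle in Figure 14-51 can be sketched as
          follows. This is a minimal illustration of the behavior described above, not
          product code, and the Clear History transition reflects our reading of the
          diagram:

```python
# Sketch of the normal status cycle: states are (status, color) pairs
# and events drive the transitions described in the text.
TRANSITIONS = {
    ("NEW", "GREEN"):    {"device down": ("MISSING", "RED")},
    ("MISSING", "RED"):  {"device up": ("NORMAL", "GREEN")},
    ("NORMAL", "GREEN"): {"device down": ("MISSING", "RED"),
                          "clear history": ("NEW", "GREEN")},
}

def next_state(state, event):
    # Unknown events leave the state unchanged
    return TRANSITIONS.get(state, {}).get(event, state)

state = ("NEW", "GREEN")
state = next_state(state, "device down")   # ("MISSING", "RED")
state = next_state(state, "device up")     # ("NORMAL", "GREEN")
```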

          You can encounter situations where a device is down for a known reason, such as an
          upgrade or hardware replacement, and you do not want it displayed with a missing/red
          status. You can use the NetView Unmanage function to set its color to tan, to avoid
          having a yellow or red status reported and propagated in the topology display. See
          Figure 14-52.




                (Diagram, IBM Tivoli SAN Manager status cycle with Unmanage: Manage/Unmanage
                toggles NORMAL/GREEN <-> NORMAL/TAN and MISSING/RED <-> MISSING/TAN; device
                down/up moves between NORMAL/GREEN and MISSING/RED; Clear History on a missing
                object leaves it NOT DISCOVERED / NOT DISPLAYED.)
               Figure 14-52 Status cycle using Unmanage function

                However, when a device is unmanaged and you select SAN → Configure Manager → Clear
                History to remove historical data, the missing device is removed from the Productivity
                Center for Fabric database and is no longer reported until it comes back up with a
                new/green status. If you have changed the label of the device and it is rediscovered after
                a Clear History, it reappears with the default generated name, because this information
                is not saved. See Figure 14-53.



                (Diagram, IBM Tivoli SAN Manager status cycle with Acknowledge: NORMAL/GREEN ->
                device down -> MISSING/RED; Ack/Unack toggles MISSING/RED <-> MISSING/DARK GREEN;
                device up returns the object to NORMAL/GREEN.)
               Figure 14-53 Status cycle using Acknowledge function

                You can use the NetView Acknowledge function to indicate that you have been notified
                about the problem and are currently searching for more information or for a solution.
                This sets the device’s color to dark green, to avoid having a yellow or red status
                reported and propagated in the topology display. You can subsequently use the
                Unacknowledge function to return to the normal status and color cycle. When the device
                becomes available, it automatically returns to the normal reporting cycle.



14.7 Practical cases
          We have re-created some typical errors that can happen in a production environment to see
          how and why Productivity Center for Fabric reacts to them.

          We have also used different configurations of the inband and outband agents and correlated
          the results with the explanations.

14.7.1 Cisco MDS 9000 discovery
           In this section we discuss the discovery of the Cisco MDS 9509, which is part of the MDS
           9000 family. Our MDS 9509 is a multilayer switch/director with a six-slot configuration.
           We have one 16-port card and one 32-port card running at 2 Gb/s. Discovery of the MDS
           9509 is performed using inband management.

           Figure 14-54 shows the lab environment used to demonstrate the following discovery. We
           call this Lab environment 3.




           (Diagram: the SAN Manager on LOCHNESS connects over the intranet to the agent host
           SANAN, which attaches to the Cisco 9509 together with SANXC1, SANXC2, and SANAN3.)
          Figure 14-54 Lab environment 3

           We first deployed a Productivity Center for Fabric Agent to SANAN. Once the agent was
           installed, it registered with the Productivity Center for Fabric Manager (LOCHNESS) and
           discovered CISCO1 (the MDS 9509). The topology in Figure 14-55 was displayed after
           deploying the agent.

           Note: In order to discover the MDS 9000, at least one Productivity Center for Fabric Agent
           must be installed on a host attached to the MDS 9000. Outband management is not
           supported for the MDS 9000.


Figure 14-55 Discovery of MDS 9509

                To display the properties of CISCO1, right-click the CISCO1 icon and select SAN
                 → SAN Properties. See Figure 14-56.




               Figure 14-56 MDS 9509 properties



The Connection option (Figure 14-57) displays information about the slots and ports where
           the hosts SANXC1, SANXC2 and SANXC3 are connected, as well as the status of each port.




           Figure 14-57 MDS 9509 connections


14.7.2 Removing a connection on a device running an inband agent
           Next, we removed the FC link between the host SICILY and the switch ITSOSW1.
            Productivity Center for Fabric does not show that the device is missing, but shows that
            the connection is missing. Because the host was running an inband management agent, the
            host continues to report its configuration to the manager over the IP network. However,
            the attached switch sends a trap to the manager to signal the loss of a link. You can use
            Monitor → Events → All to view the trap received by NetView. Double-click the trap
            coming from ITSOSW1 to see its details, as shown in Figure 14-58.




           Figure 14-58 Trap received by NetView

           We see that ITSOSW1 sent a trap to signal that FCPortIndex4 (port number 3) has a status of
           2 (which means Offline).
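
            The decoding we did by hand can be sketched as follows. Only the value actually
            seen in our trap (2, meaning Offline) comes from the text; the other status codes
            follow the Fibre Channel FE MIB convention, and the port-number offset is specific
            to our 2109, so treat both as assumptions to verify against your switch
            documentation:

```python
# Hypothetical decoder for the port-status trap discussed above.
# Status code 2 = Offline is from our trap; the remaining codes are
# assumed from the Fibre Channel FE MIB and must be verified per switch.
PORT_STATUS = {1: "Online", 2: "Offline", 3: "Testing", 4: "Link failure"}

def decode_port_trap(fc_port_index, status_code):
    # On our 2109, FCPortIndex is 1-based, so port number = index - 1
    port_number = fc_port_index - 1
    meaning = PORT_STATUS.get(status_code, "Unknown")
    return f"port {port_number}: {meaning}"

print(decode_port_trap(4, 2))   # port 3: Offline
```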




The correlation between the inband information and the received trap is then made correctly,
                and only the connection is shown as missing. You can see in Figure 14-59 that the
                connection line has turned red, using the colors referenced in Table 14-1 on page 709.




               Figure 14-59 Connection lost

               We then restored the connection, and following the status cycle explained in Figure 14-51 on
               page 743, the connections returned to normal (Figure 14-60).




               Figure 14-60 Connection restored




Next, we removed one of the two connections from the host TUNGSTEN to ITSOSW3.
One link is lost, so the connection is now shown as suspect (yellow), as in Figure 14-61.




Figure 14-61 Marginal connection

NetView follows its status propagation rules, listed in Table 14-4 on page 711. This connection
links to a submap with the two physical connections. The bottom physical connection is missing
(red) and the other (top) one is normal (black), resulting in a propagated status of suspect
(yellow) on the parent map (left-hand side). See Figure 14-62.
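
The propagation rule at work here can be sketched as a small function. This is an
illustration of the mixed-status case we observed, not NetView's actual rule table;
see Table 14-4 for the authoritative rules:

```python
# Sketch of status propagation for a connection submap: all-normal
# propagates normal, all-missing propagates missing, and any mixture
# propagates suspect (yellow), as in Figure 14-62.
def propagate(child_statuses):
    statuses = set(child_statuses)
    if statuses == {"normal"}:
        return "normal"
    if statuses == {"missing"}:
        return "missing"
    return "suspect"   # mixed -> yellow on the parent map

print(propagate(["normal", "missing"]))   # suspect
```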




Figure 14-62 Dual physical connections with different status



14.7.3 Removing a connection on a device not running an agent
                A device with no agent is detected only through its connection to the switch. If the
                connection is broken, the host cannot be discovered. In this case, we unplugged the FC
                link between the host LEAD and the switch ITSOSW2. LEAD is running neither an inband
                nor an outband agent, as we can see using SAN → Agents configuration, shown in
                Figure 14-63.




               Figure 14-63 Agent configuration

                After removing the link on LEAD, we received a standard Windows missing-device popup
                (Figure 14-64) indicating that the host could no longer see its FC-attached disk device.




               Figure 14-64 Unsafe removal of Device



Productivity Center for Fabric shows the device as Missing, and the icon changes to red (see
the color status listing in Table 14-1 on page 709), because it is no longer able to determine
the status of the device (see Figure 14-65).




Figure 14-65 Connection lost on an unmanaged host

In Figure 14-66, the host is in Unmanaged (tan) status because we decided to unmanage it.




Figure 14-66 Unmanaged host




We finally select SAN → Configure Manager → Clear History (see Figure 14-67).




               Figure 14-67 Clear History

               After the next discovery, as explained in Figure 14-52 on page 744, the host is no longer
               displayed (Figure 14-68), since it has been removed from the Productivity Center for Fabric
               database.




               Figure 14-68 NetView unmanaged host not discovered


14.7.4 Powering off a switch
               In this test we power off a SAN switch and observe the results.

               Lab 2 environment
               For demonstration purposes in the following sections, this lab is referenced as Lab 2. The
               configuration consists of:
                  Two IBM 2109-S08 (ITSOSW1 and ITSOSW2) switches with firmware V2.6.0g



                   One IBM 2109-S16 (ITSOSW3) switch with firmware V2.6.0g
                   One IBM 2109-F16 (ITSOSW4) switch with firmware V3.0.2
                   One IBM 2107-G07 SAN Data Gateway
                   Two pSeries 620 (BANDA, KODIAK) running AIX 5.1.1 with:
                   – Two IBM 6228 cards
                   One IBM pSeries F50 (BRAZIL) running AIX 5.1.1ML4 with:
                   – One IBM 6227 card with firmware 02903291
                   – One IBM 6228 card with firmware 02C03891
                   One HP Server running HP-UX 11.0
                   – One FC HBA
                   Four Intel® servers (TONGA, PALAU, WISLA, LOCHNESS)
                   Two Intel servers (DIOMEDE, SENEGAL) with:
                   – Two QLogic QLA2200 cards with firmware 8.1.5.12
                   One IBM xSeries 5500 (BONNIE) with:
                   – Two QLogic QLA2300 cards with firmware 8.1.5.12
                   One IBM Ultrium Scalable Tape Library (3583)
                   One IBM TotalStorage FAStT700 storage server

                      Figure 14-69 shows the SAN topology of our lab environment.


          (Diagram: the hosts and devices listed above, attached through switches sw1 to sw4;
          LOCHNESS is the SAN Manager, and the SDG with the LTO 3583 library, the FAStT700,
          the NAS 200, and an iSCSI gateway are also shown.)
Figure 14-69 SAN lab - environment 2


We powered off the switch ITSOSW4, with the managed host SENEGAL enabled. The
                topology map reflects this, as shown in Figure 14-70. The switch and all its connections
                change to red.




               Figure 14-70 Switch down Lab 2

                The agent running on the managed host (SENEGAL) has scanners listening to the HBAs
                located in the host. Those HBAs detect that the attached device, ITSOSW4, is not active
                because there is no signal from ITSOSW4. The scanners retrieve this information and
                report it back to the manager through the standard TCP/IP connection. Because the switch
                is not active, the hosts can no longer access the storage servers.

                The active agent (SENEGAL) sends the information to the manager, which triggers a new
                discovery. Because the switch no longer responds to outband management, Productivity
                Center for Fabric correlates all the information and, as a result, shows the connections
                between the managed hosts and the switch, and the switch itself, as red/missing. The
                storage server is shown as green/normal because of a second Fibre Channel connection to
                ITSOSW2. ITSOSW2 is also green/normal because of the outband management performed
                on this switch.

                The active agent host is still reported as normal/green because it sends its information
                to the manager over the TCP/IP network. The manager can therefore determine that only
                the agent’s switch connections, not the host itself, are down.




Now, we powered the switch on again. At startup, the switch sends a trap to the manager,
which causes the manager to request a new discovery. The result is shown in
Figure 14-71.




Figure 14-71 Switch up Lab 2

Now, following the status propagation detailed in 14.6, “Status cycles” on page 743, all the
devices are green/normal.




14.7.5 Running discovery on a RNID-compatible device
                When you define a host for inband management, the topology scanner launches inband
                queries to all attached HBAs. Remote HBAs that support RNID send back information
                such as the device type. On switch ITSOSW2 is a Windows host, CLYDE, with a QLogic
                card at the required driver level. There is no agent installed on this host. We see,
                however, that it is discovered as a host rather than as an Unknown device, as shown in
                Figure 14-72, because of the HBA RNID support.




               Figure 14-72 RNID discovered host

                You can see in the SAN Properties window, Figure 14-73, that RNID support provides
                only the device type (Host) and the WWN. Compare this with the SAN Properties
                window for a managed host, shown in Figure 14-42 on page 737.




               Figure 14-73 RNID discovered host properties




To make the map more explicit, we entered CLYDE in the Label field (using the method shown
in Figure 14-44), and the host is now displayed with its new label (Figure 14-74).




Figure 14-74 RNID host with changed label




14.7.6 Outband agents only
                To see what happens when there are only outband agents, that is, with no Productivity
                Center for Fabric agents running, we stopped all the running inband agents, cleared the
                Productivity Center for Fabric configuration using the SAN → Configure Agents → Remove
                button, and then re-configured the outband agents on the switches ITSOSW1, ITSOSW2, and
                ITSOSW4, as shown in Figure 14-75.




               Figure 14-75 Only outband agents

                When configuring the agents, we also used the Advanced button to enter the administrator
                user ID and password for the switches. The scanners need this information to obtain
                administrative information, such as zoning, for Brocade switches.




Productivity Center for Fabric discovers the topology by scanning the three registered
switches, as shown in Figure 14-76. The information about the attached devices is limited
to the WWN of each device, because this information is retrieved from the switch and there
is no other inband management. Note the ‘-’ signs next to the Device Centric and Host
Centric Views: this information is retrieved only by inband agents, so it is not available here.




Figure 14-76 Explorer view with only outband agents

Figure 14-77 shows the information retrieved from the switches (SAN Properties).




Figure 14-77 Switch information retrieved using outband agents




14.7.7 Inband agents only
                For this practical case, we first unplugged all Fibre Channel connections from all agents
                and removed all the outband agents from the configuration using the SAN → Configure
                Agents → Remove tab. We then forced a new poll. As expected, the agents returned only
                information about the node and the local filesystems, as shown in Figure 14-78. Note the
                ‘-’ sign in front of /data01 for host SICILY: the filesystem is defined but not mounted,
                because the Fibre Channel connections are not active.




               Figure 14-78 Inband agents only without SAN connections

                We reconnected the Fibre Channel connections from all agents to the switch and forced a
                new poll. We now see that all agents reported information about their filesystems.
                Because the agents are connected to a switch, the inband agents retrieve information
                from it using inband management. That explains why we see all the devices, including
                those without agents installed. Figure 14-79 shows that:
                   Our four inband agents (BRAZIL, GALLIUM, SICILY, SOL-E) are recognized.
                   The two switches ITSOSW1 and ITSOSW2 are found, because agents are connected to
                   them.
                   Device 1000006045161FF5 is displayed because it is connected to the switch ITSOSW1.
                   The device type is Unknown, because there is neither an inband nor an outband agent
                   on this device.




Figure 14-79 Inband agents only with SAN connections

We can also display SAN Properties as shown in Figure 14-80.




Figure 14-80 Switches sensor information

We now have no zoning information available, because this is retrieved from the switch
outband agent for the 2109 switch. This is indicated by the ‘-’ sign next to Zone View in
Figure 14-79.


14.7.8 Disk devices discovery
                The topology scanner launches inband queries to all attached HBAs. The Attribute scanner
                then issues SCSI requests to get attribute information about the remote devices. Due to
                LUN masking, the storage server denies all requests if there are no LUNs defined for the
                querying host. Figure 14-81 shows how our SAN topology is mapped when there is an IBM
                MSS storage server but no LUNs are defined or accessible for the hosts in the same
                fabric. The storage server is shown as an Unknown device because the inband agents were
                not allowed to issue SCSI requests to the storage server, as they had no assigned LUNs.




               Figure 14-81 Discovered SAN with no LUNS defined on the storage server




Figure 14-82 shows that the host CRETE is not included in the MSS zone (we enabled the
outband agent for the switch in order to display zone information). This zone includes
TUNGSTEN, which has no LUNs defined on the MSS.




Figure 14-82 MSS zoning display

We changed the MSS zone to include the CRETE server. We then ran cfgmgr on CRETE so that
it scanned its configuration and found the disk located on the MSS, as shown in Example 14-1.

Example 14-1 cfgmgr to discover new disks
# lspv
hdisk0         00030cbf4a3eae8a     rootvg
hdisk1         00030cbf49153cab     None
hdisk2         00030cbf170d8baa     datavg
hdisk3         00030cbf170d9439     datavg
# cfgmgr
# lspv
hdisk0         00030cbf4a3eae8a     rootvg
hdisk1         00030cbf49153cab     None
hdisk2         00030cbf170d8baa     datavg
hdisk3         00030cbf170d9439     datavg
hdisk4         00030cbf8c071018     None




Now, the agent on CRETE is able to run SCSI commands on the MSS and discovers that it is
               a storage server. Productivity Center for Fabric maps it correctly in Figure 14-83.




               Figure 14-83 MSS zone with CRETE and recognized storage server


14.7.9 Well-placed agent strategy
               The placement of inband and outband agents determines the information displayed:
                  For a topology map, you need to define inband and outband agents on some selected
                  servers and switches in order to discover all your topology. Switch zoning and LUN
                  masking may restrict access to some devices.
                  For a complete topology map, including correct device icons, you need to define inband
                  and outband agents on all servers and switches, except on those supporting RNID.
                  For information on zones, you need to define the switches as outband agents and set the
                  user ID and password in the Advanced properties.
                  For complete Device Centric and Host Centric views, you need to place inband agents
                  on all servers you want to be displayed.

               Before implementing inband and outband agents, you should have a clear idea of your
               environment and the information you want to collect. This will help you to select the agents
               and may minimize overhead caused by inband and outband agents.

               In our configuration, we decided to place one agent on GALLIUM which is connected to the
               two fabrics and has LUNs assigned on the FAStT storage server (Figure 14-84).




Figure 14-84 “Well-placed” agent configuration

The agent will use inband management to:
   Query the directly attached devices.
   Query the name server of the switches to get the list of other attached devices.
   Launch inband management queries to other devices to get their WWN and device type
   (for RNID-compatible supported drivers).
   Launch SCSI requests to get LUN information from storage servers.

You can see in Figure 14-85 that the agent on GALLIUM has returned information on:
   Directly attached switches (ITSOSW1 and ITSOSW4)
   Devices attached to those switches (if they are in the same zones)
   LUNs defined on the FAStT for this server
   Its own filesystems

Of the other hosts, only CLYDE runs RNID-compatible drivers, so all other devices
— excluding the switches and the FAStT storage server — are displayed with an unknown
device icon. However, we have shown how to get a complete map of our SAN by deploying
just one inband agent.




Figure 14-85 Discovery process with one well-placed agent



14.8 NetView
               In this section we describe how to use the NetView program’s predefined performance
               applications and how to create your own applications to monitor the Storage Area Network
               performance. The NetView program helps you manage performance by providing several
               ways to track and collect Fibre Channel MIB objects. You can use performance information in
               any of the following ways:
                  Monitoring the network for signs of potential problems
                  Resolving network problems
                  Collecting information for trend analysis
                  Allocating network resources
                  Planning future resource acquisition

               The data collected by the NetView program is based on the values of MIB objects. The
               NetView program provides applications that display performance information:
                  NetView Graph displays MIB object values in graphs.
                  Other NetView tools display MIB object values in tables or forms.




14.8.1 Reporting overview
          The NetView MIB Tool Builder enables you to create applications that collect, display, and
          save real-time MIB data. The MIB Data Collector provides a way to collect and analyze
          historical MIB data over long periods of time to give you a more complete picture of your
          network’s performance. We will explain the SNMP concepts and standards, demonstrate the
          creation of Data Collections and the use of the MIB Tool Builder as it applies to SAN network
          management.

          Figure 14-86 lists the topics we cover in this overview section.


                NetView Reporting Overview
                Understanding SNMP and MIBs

                Configuring
                     MIBs (copying and loading)
                     IBM 2109
                     NetView

                MIB Data Collector

                MIB Tool Builder

                NetView Graphing tool
          Figure 14-86 Overview


14.8.2 SNMP and MIBs
           The Simple Network Management Protocol (SNMP) has become the de facto standard for
           internetwork (TCP/IP) management. Because it is a simple solution, requiring little code to
           implement, vendors can easily build SNMP agents for their products. SNMP is extensible,
           allowing vendors to easily add network management functions to their existing products.
           SNMP also separates the management architecture from the architecture of the hardware
           devices, which broadens the base of multivendor support. SNMP is widely implemented and
           available today.

           An SNMP network management system contains two primary elements:
              Manager — This is the console through which the network administrator performs network
              management functions.
              Agents — These are the entities that interface to the actual device being managed.
              Switches and directors are examples of managed devices that contain managed objects.

           Important:
              In our configuration, the SNMP manager is NetView and the SNMP agents are IBM
              2109 Fibre Channel Switches.

          These objects are arranged in what is known as the Management Information Base (MIB).
          SNMP allows managers and agents to communicate for the purpose of accessing these
          objects. Figure 14-87 provides an overview of the SNMP architecture.


SNMP architecture

                                                                                                  iSCSI
                                                                                                   MIB



                                    Application Server
                                      iSCSI Initiator

                                                                                              SNMP agent




                                                                                IP Storage




                                                              Ethernet
                                                                               iSCSI Target

                                        Desktop
                                      iSCSI Initiator
                                                                          2109 Fibre Channel switch




                                                                         SNMP agent
                                                                                                FC switch MIB
                                                                                                 FA/FC MIB
                                                                                                   FE MIB




                                     Tivoli SAN Manager
                                           “NetView”

               Figure 14-87 SNMP architecture overview

               A typical SNMP manager performs the following tasks:
                  Queries agents
                  Gets responses from agents
                  Sets variables in agents
                  Acknowledges asynchronous events from agents

               A typical SNMP agent performs the following tasks:
                  Stores and retrieves management data as defined by the MIB
                  Signals an event to the manager
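
The manager/agent interaction above can be illustrated with a toy exchange (a conceptual sketch only, not NetView or switch code). The OIDs are standard MIB-II system objects; the stored values are made up for this example:

```python
# Conceptual sketch: an SNMP-style manager/agent exchange against a MIB
# object store. A real agent speaks the SNMP wire protocol; this toy agent
# just keys management data by OID, as a MIB does.

class ToyAgent:
    """Stores and retrieves management data as defined by a MIB."""

    def __init__(self, mib):
        self.mib = dict(mib)

    def get(self, oid):
        # Respond to a manager's query for one object.
        return self.mib.get(oid)

    def set(self, oid, value):
        # A manager sets a variable in the agent.
        self.mib[oid] = value


# sysDescr.0 and sysName.0 are standard MIB-II objects; values are invented.
agent = ToyAgent({
    "1.3.6.1.2.1.1.1.0": "Fibre Channel Switch",   # sysDescr.0
    "1.3.6.1.2.1.1.5.0": "itsosw2",                # sysName.0
})

print(agent.get("1.3.6.1.2.1.1.5.0"))         # manager queries the agent
agent.set("1.3.6.1.2.1.1.5.0", "itsosw2-new") # manager sets a variable
print(agent.get("1.3.6.1.2.1.1.5.0"))
```

In the real configuration, the manager role is played by NetView and the agent role by the SNMP agent inside each 2109 switch.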

               MIBs supported by NetView
               NetView supports the following types of MIBs:
                  Standard MIB: All devices that support SNMP are also required to support a standard set
                  of common managed object definitions, of which a MIB is composed. The standard MIB
                  object definitions, MIB-I and MIB-II, enable you to monitor and control SNMP managed
                  devices. Agents contain the intelligence required to access these MIB values.
                  Enterprise-specific MIB: SNMP permits vendors to define MIB extensions, or
                  enterprise-specific MIBs, specifically for controlling their products. These
                  enterprise-specific MIBs must follow certain definition standards, just as other MIBs must,
                  to ensure that the information they contain can be accessed and modified by agents. The
                  NetView program provides the ability to load enterprise-specific MIBs from a MIB
                  description file. By loading a MIB description file containing enterprise-specific MIBs on an
                  SNMP management station, you can monitor and control vendor devices.

                Note: We are using the Brocade 2.6 enterprise-specific MIBs for SAN network
                performance reporting and the IBM TotalStorage IP Storage 200i iSCSI MIB.



MIB tree structure
          MIB objects are logically organized in a hierarchy called a tree structure. Each MIB object has
          a name derived from its location in the tree structure. This name, called an object ID, is
          created by tracing the path from the top of the tree structure, or the root, to the bottom, the
          object itself. Each place where the path branches is called a node. A node can have both a
          parent and children. If a node has no children, it is called a leaf node. A leaf node is the actual
          MIB object. Only leaf nodes return MIB values from agents. The MIB tree structure is shown
          in Figure 14-88. Note the leaf entry for bcsi which has been added into the tree.

          For more information regarding SNMP MIB tree structures, see the following Web sites
          relating to SNMP RFCs:
             http://silver.he.net/~rrg/snmpworld.htm
             http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/snmp.htm


                                                               TOP




                                   CCITT (0)                                            JOINT-ISO-CCITT (2)
                                                               ISO (1)


                    STD (0)         REG AUTHORITY
                                          (1)
                                                                             ORG (3)
                                                         MEMBER BODY
                                                             (2)


                                                                              DOD (6)



                                                                                        INTERNET (1)



                                                                                                      PRIVATE (4)
                              DIRECTORY (1)         MGMT (2)         EXPERIMENTAL (3)


                                                                                                        ENTERPRISE (1)
                                                    MIB (1)




                                                    RESERVED (0)                            IBM (2)     bcsi (1588)   iSCSI

          Figure 14-88 MIB tree structure
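The object ID derivation described above can be sketched in a few lines; the sub-identifiers follow the path from the root of the tree in Figure 14-88 down to the bcsi (Brocade) enterprise node:

```python
# Sketch: deriving a numeric object ID by tracing the MIB tree from the root
# to a node, as described above. The (name, sub-identifier) pairs follow
# Figure 14-88: iso(1).org(3).dod(6).internet(1).private(4).enterprise(1).
TREE_PATH = [
    ("iso", 1), ("org", 3), ("dod", 6), ("internet", 1),
    ("private", 4), ("enterprise", 1), ("bcsi", 1588),
]

def object_id(path):
    """Join the sub-identifiers along a root-to-node path into a dotted OID."""
    return "." + ".".join(str(num) for _name, num in path)

print(object_id(TREE_PATH))   # .1.3.6.1.4.1.1588
```

Only leaf nodes return MIB values from agents; interior nodes such as bcsi merely anchor the subtree under which the enterprise-specific objects live.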



14.9 NetView setup and configuration
          In this section we provide step-by-step details for copying and loading the Fibre Channel and
          iSCSI MIBs into NetView. We then describe the FE MIB and SW MIB in the Brocade 2109
          Fibre Channel switch and also describe the FC (Fibre Alliance) MIB in the IBM TotalStorage
          IP Storage 200i device.

           Note: The FC (Fibre Alliance) MIB is shipped by most Fibre Channel switch vendors.
           Brocade Communications provides limited support for the FC MIB.


14.9.1 Advanced Menu
          In order to enable certain advanced features in NetView, we must first enable the Advanced
          Menu feature in the Options pull-down menu as shown in Figure 14-89. Shut down and
          restart NetView for the changes to take effect.

Figure 14-89 Enabling the advanced menu


14.9.2 Copy Brocade MIBs
               Before MIBs can be loaded into NetView, they must first be copied into the
               \usr\ov\snmp_mibs directory. All vendor-specific MIBs are located here.

               We accessed the Brocade MIBs from the Web site:
                  http://www.brocade.com/support/mibs_rsh/index.jsp

               We downloaded the MIBs below and copied them to the directory.
                  v2_6trp.mib (Enterprise Specific trap)
                  v2_6sw.mib (Fibre Channel Switch)
                  v2_6fe.mib (Fabric Element)
                  v2_6fa.mib (Fibre Alliance)

                Note: If you have unloaded all the MIBs in the MIB description file (\usr\ov\snmp_mibs),
                you must load MIB-I or MIB-II before you can load any enterprise-specific MIBs. These are
                loaded by default in NetView.

               In Example 14-2 we show the \usr\ov\snmp_mibs directory listing with our newly added MIBs.

               Example 14-2 MIB directory
               Directory of C:\usr\ov\snmp_mibs

               04/13/2002   09:33a               81,253 v2_6FA.mib
               08/27/2002   02:45p               79,095 v2_6FE.mib
               04/13/2002   09:33a               60,139 v2_6SW.mib
               04/13/2002   09:33a                5,240 v2_6TRP.mib
                               4 File(s)         225,727 bytes
                               0 Dir(s)   6,595,670,016 bytes free

               C:\usr\ov\snmp_mibs>



14.9.3 Loading MIBs
          After copying the MIBs to the appropriate directory, they must then be loaded into NetView.

          IBM 2109
           The IBM 2109 comes configured to use the MIB II-private MIB (TRP-MIB), FC Switch MIB
           (SW-MIB), Fibre Alliance MIB (FA-MIB), and Fabric Element MIB (FE-MIB). By default, the
           MIBs are not enabled. Here is a description of each MIB and its respective groupings.

          MIB II-private MIB (v2_6trp.mib or TRP-MIB)
          The object types in MIB-II are organized into the following groupings:
             The System Group
             The Interfaces Group
             The Address Translation Group
             The IP Group
             The ICMP Group
             The TCP Group
             The UDP Group
             The EGP Group
             The Transmission Group
             The SNMP Group

          FC_MGMT (Fibre Alliance) MIB (v2_6fa.mib or FA-MIB)
          The object types in FA-MIB are organized into the following groupings. Currently Brocade
          does not write any performance related data into the OIDs for this MIB.
             Connectivity
             Trap Registration
             Revision Number
             Statistic Set

          Fabric Element MIB (v2_6fe.mib or FE-MIB)
          The object types in FE-MIB are organized into these groupings:
             Configuration
             Operational
             Error
             Accounting
             Capability

          FC Switch MIB (v2_6sw.mib or SW-MIB)
          The object types in SW-MIB are organized into the following groupings:
             swSystem
             swFabric
             swActCfg
             swFCport
             swNs
             swEvent
             swFwSystem
             swEndDevice


To enable the MIBs for the IBM/Brocade switch, log in to the switch via a telnet session, using
                an ID with administrator privileges (for example, the default admin ID). We enabled all four of
                the above MIBs using the snmpmibcapset command. The command can either disable or
                enable a specific MIB within the switch. Example 14-3 shows output from the snmpmibcapset
                command.

               Example 14-3 snmpmibcapset command on IBM 2109
               itsosw2:admin> snmpmibcapset
               The SNMP Mib/Trap Capability has been set to support
                FE-MIB SW-MIB FA-MIB SW-TRAP FA-TRAP SW-EXTTRAP
               FA-MIB (yes, y, no, n): [yes]
               SW-TRAP (yes, y, no, n): [yes]
               FA-TRAP (yes, y, no, n): [yes]
               SW-EXTTRAP (yes, y, no, n): [yes]
               no change
               itsosw2:admin>


               NetView
                The purpose of loading a MIB is to define the MIB objects so that the NetView program’s
                applications can use those MIB definitions. The MIB you are interested in must be loaded on
                the system where you want to use the MIB Data Collector or MIB Tool Builder. Some
                vendor-specific MIBs are already loaded into NetView.

               Since we want to collect performance MIB objects types for the Brocade 2109 switch, we will
               load its MIB. On the NetView interface, select Tools → MIB → Loader SNMP V1. This will
               launch the MIB Loader interface as shown in Figure 14-90.




               Figure 14-90 MIB loader interface




Each MIB that you load adds a subtree to the MIB tree structure. You must load MIBs in order
of their interdependencies. We loaded v2_6TRP.MIB first by clicking Load, then selecting
the TRP.MIB from the \usr\ov\snmp_mibs directory — see Figure 14-91.




Figure 14-91 Select and load TRP.MIB

Click Open and the MIB will be loaded into NetView. Figure 14-92 shows the MIB loading
indicator.




Figure 14-92 Loading MIB




We then loaded v2_6SW.MIB, v2_6FE.MIB, and v2_6FA.MIB in turn using the same
                process. You must load the MIBs in order of their interdependencies. A MIB is dependent on
                another MIB if its highest node is defined in the other MIB.
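
The load-order rule above is an ordinary dependency sort, which can be sketched as follows. The file names match those copied in 14.9.2, but the dependency edges here are illustrative assumptions (the enterprise MIBs are taken to hang off nodes defined in v2_6trp.mib), not definitions from the MIB files themselves:

```python
# Sketch: computing a safe MIB load order via depth-first topological sort.
# A MIB must load after every MIB it depends on.

def load_order(deps):
    """Return the MIBs ordered so each appears after its dependencies."""
    order, seen = [], set()

    def visit(mib):
        if mib in seen:
            return
        seen.add(mib)
        for dep in deps.get(mib, []):
            visit(dep)          # load dependencies first
        order.append(mib)

    for mib in deps:
        visit(mib)
    return order

# Assumed dependency edges for illustration only.
deps = {
    "v2_6trp.mib": [],
    "v2_6sw.mib": ["v2_6trp.mib"],
    "v2_6fe.mib": ["v2_6trp.mib"],
    "v2_6fa.mib": ["v2_6trp.mib"],
}
print(load_order(deps))   # v2_6trp.mib comes first
```

This mirrors what we did manually in the MIB Loader: TRP first, then the MIBs that depend on it.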

                After the MIBs were loaded, we verified that we were able to traverse the MIB tree and select
                objects from the enterprise-specific MIBs. We used the NetView MIB Browser to traverse the
                branches of the above MIBs. Click Tools → MIB → Browser SNMP v1 to launch the MIB
                browser, and use the Down Tree button to navigate down through a MIB (see Figure 14-93).




               Figure 14-93 NetView MIB Browser



14.10 Historical reporting
               NetView provides a graphical reporting tool that can be used against real-time and historical
               data. After loading the Brocade (IBM 2109) MIBs into NetView, we demonstrate how to
               compile historical performance data about the IBM 2109 by using the NetView MIB Data
               Collector and querying the MIB referred to in 14.9.3, “Loading MIBs” on page 771. This tool
               enables us to manipulate data in several ways, including:
                  Collect MIB data from the IBM 2109 at regular intervals.
                  Store MIB data about the IBM 2109.
                  Define thresholds for MIB data and generate events when the specified thresholds are
                  exceeded. Setting MIB thresholds enables us to automatically monitor important SAN
                  performance parameters to help report, detect and isolate trends or problems.




Brocade 2109 MIBs and MIB objects
           We now need to understand what MIB objects to collect. The IBM 2109 has four MIBs loaded
           and enabled, described in 14.9.3, “Loading MIBs” on page 771. We selected the MIB object
           identifiers in Figure 14-94 because of their importance in managing SAN network
           performance. SAN network administrators may want to specify other MIB object identifiers to
           meet their own requirements for performance reporting. You should consult your
            vendor-specific MIB documentation for details of the objects in the MIB. We will describe how
            to create a MIB Data Collector for the object identifiers in the MIBs shown in Figure 14-94
            and Figure 14-95.


                 FE-MIB Error Group

                 fcFXPortLinkFailures   - Number of link failures detected by this FxPort

                 fcFXPortSyncLosses - Number of loss of synchronization detected by
                                         the FxPort

                 fcFXPortSigLosses - Number of signal losses detected by the FxPort.

           Figure 14-94 FE-MIB — Error Group




                SW-MIB Port Table Group
               swFcPortTXWords - Number of FC words transmitted by the port

               swFcPortRXWords - Number of FC words received by the port

               swFcPortTXFrames - Number of FC frames transmitted by the port

               swFcPortRXFrames - Number of FC frames received by the port

               swFcPortTXC2Frames - Number of Class 2 frames transmitted by the port

               swFcPortTXC3Frames - Number of Class 3 frames transmitted by the port

           Figure 14-95 SW MIB — Port Table Group


14.10.1 Creating a Data Collection
            Our first Data Collection targets the MIB object swFCPortTxFrames, which counts the
            number of Fibre Channel frames that the port has transmitted. Its parent table contains
            information about the physical state, operational status, performance, and error statistics of
            each Fibre Channel port on the switch (for example, F_Port, E_Port, U_Port, FL_Port).




Figure 14-96 shows the MIB tree where this object identifier resides. The root of the tree,
                bcsi, stands for Brocade Communications Systems Incorporated.

               The next several pages describe the step-by-step process for defining a Data Collection on
               the swFcPortTxFrames MIB object identifier using NetView.


                                                 bcsi (1588)


                                                    commDev (2)




                                                   Fibre channel (1)




                                                     fcSwitch (1)




                                                        sw (1)




                                                                       swFCPort (6)




                                                                            swFCPortTable (2)


                      IBM 2109
                                                                            swFCPortEntry (1)

                      private MIB tree
                                                                                                swFCPortTxFrames (13)


               Figure 14-96 Private MIB tree for bcsi

               1. To create the NetView Data Collection, select Tools → MIB → Collect Data from the
                  NetView main menu. The MIB Data Collector interface displays (Figure 14-97). Select
                  New to create a collection.




               Figure 14-97 MIB Data Collector GUI




2. If creating the first Data Collection, you will also see the pop-up in Figure 14-98 to start the
   Data Collection daemon. Click Yes to start the SNMPCollect daemon.




Figure 14-98 starting the SNMP collect daemon

3. The Data Collection Wizard GUI then displays (Figure 14-99). This is the first step in
   creating a new Data Collection. By default NetView has navigated down to the Internet
   branch of the tree (.iso.org.dod.internet). See Figure 14-88 on page 769 for the overall
   tree structure. Highlight private and click Down Tree to navigate to the private MIB.




Figure 14-99 internet branch of MIB tree

   We have now reached the private branch of the MIB tree (.iso.org.dod.internet.private).
   See Figure 14-100.




Figure 14-100 Private arm of MIB tree

               4. Continue to navigate down the enterprise branch of the tree by clicking Down Tree.
                  Figure 14-101 shows the enterprise branch of the tree
                  (.iso.org.dod.internet.private.enterprise).




               Figure 14-101 Enterprise branch of MIB tree



5. We reach the bcsi branch of the tree by clicking Down Tree. Figure 14-102 shows the bcsi
   (Brocade) branch of the tree (.iso.org.dod.internet.private.enterprise.bcsi).




Figure 14-102 bcsi branch of MIB tree




6. We continue to navigate down the tree, using the path shown in Figure 14-96,
                   and, as shown in Figure 14-103 on page 780, eventually reaching:
                   .iso.org.dod.internet.private.enterprise.bcsi.commDev.fibrechannel.fcSwitch.sw.swFCport.
                   swFCPortTable.swFCPortEntry.swFCPortTxFrames.




               Figure 14-103 swFCPortTxFrames MIB object identifier

                7. We selected swFCPortTxFrames and clicked OK. We received the pop-up in
                   Figure 14-104 from the collection wizard. This pop-up occurs because this will be the
                   first node added to this collection. NetView then adds the swFCPortTxFrames MIB Data
                   Collection definition as a valid data collector entry.




               Figure 14-104 Adding the nodes




This launches the Add Nodes to the Collection Dialog, which is the second step in creating
   a new Data Collection. See Figure 14-105.




Figure 14-105 Add Nodes to the Collection Dialog

8. We proceeded to customize the Collect MIB Data From fields, using the following
   steps:
   a. We entered the switch node name for which we wanted to collect performance data (in
      this case, ITSOSW2.ALMADEN.IBM.COM) and clicked Add Node. You can add a
      node either by selecting it on the topology map or by typing the device's IP address
      or hostname in the field. You can also select multiple devices on the topology map
      and click Add Selected Nodes from Map, which adds all the nodes selected on the
      topology map to the Collect MIB Data From field. We added several nodes to the
      collection by entering one device at a time in the Node field and clicking Add Node.
      To remove a node, click the node name in the list and click Remove.
   b. We then customized the section Set the Polling Properties for these Nodes, using
      the following steps:
       i. We changed the Poll Nodes Every field to 5 minutes. This specifies how
          frequently the nodes are polled.

           Important: Before setting the polling interval, you should have a clear
           understanding of available and used bandwidth in your network. Shorter polling
           intervals generate more SNMP data on the network.

      ii. We checked Store MIB Data. This will store the MIB data that is collected to
          C:/usr/ov/databases.
       iii. The Check Threshold if box was checked. This defines the arm threshold. We
            want to collect data and signal an event each time more than 200 frames are sent
            on a particular port. Since we checked this box, we are required to define the
            trap value and rearm number fields.
       iv. The then send Trap Number option was configured. We used the default setting,
           which is the MIB-II enterprise-specific trap.


v. We then configured the and rearm When field. We specified a rearm value of
                         greater than or equal to 75% of the arm threshold value. This means that a rearm
                         trap will be generated and sent when the number of TX frames falls back to 150.
                         Note that these traps are NetView-specific traps (separate from Productivity
                         Center for Fabric traps) and will therefore be sent to the NetView console.
               9. Click OK to create the new Data Collection, shown in Figure 14-106. Select the
                  swFCPortTxFrames Data Collection and click Collect.




               Figure 14-106 Newly added Data Collection for swFCTxFrames


                 Note: It can take up to 2 minutes before NetView begins collecting the newly defined
                 Data Collection. To verify that data is being captured, navigate to
                 c:\usr\ov\databases\snmpcollect. If there are files present, then the Data Collection is
                 functioning properly.

                10. Click Close and the Stop and restart Collection dialog is displayed, as in Figure 14-107.
                   Click Yes to recycle the snmpcollect daemon. At this point the Data Collection status
                   (Figure 14-106 above) should change from Suspended to To be Collected.




               Figure 14-107 Restart the collection daemon

                We are now collecting the swFCPortTxFrames data on ITSOSW2. Depending upon the level
                of granularity required for your reporting needs, you may want to collect data over shorter or
                longer periods. In our lab we collected every 5 minutes, but you may want to collect data
                once every hour for a week or once every hour for a month.

               We will now use the NetView Graph tool to display the data collected as described in 14.10.4,
               “NetView Graph Utility” on page 784.




Note: We followed the same procedure to add the remaining metrics for Data Collection:
            swFCRxFrames, swFCTxErrors, and swFCRxErrors. For demonstration purposes we used
            a value of 50 for the arm threshold and a value of 75% for re-arm. Your values for
            arm/re-arm may differ from what we used.
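
The arm/re-arm behavior described in steps iii–v can be modeled in a few lines. This is a simplified sketch (it fires only the arm event and re-arms silently, whereas NetView can also send a rearm trap), using our arm threshold of 200 frames and a 75% re-arm level:

```python
# Simplified model of NetView's arm/re-arm threshold behavior: an event
# fires when a sample crosses the arm threshold, and the threshold is
# re-armed only after the value falls back to the re-arm level.

class Threshold:
    def __init__(self, arm, rearm_pct=0.75):
        self.arm = arm
        self.rearm = arm * rearm_pct   # 75% of arm, as in our collection
        self.armed = True

    def sample(self, value):
        """Return True when a threshold event (trap) should be generated."""
        if self.armed and value > self.arm:
            self.armed = False         # fire once, then wait for re-arm
            return True
        if not self.armed and value <= self.rearm:
            self.armed = True          # value fell back: re-arm
        return False

t = Threshold(arm=200)                 # 200 TX frames, as configured above
events = [t.sample(v) for v in [180, 210, 250, 150, 220]]
print(events)                          # [False, True, False, False, True]
```

Note how the 250 sample does not fire a second event: the threshold stays disarmed until a sample drops to the re-arm level (150), which prevents a flood of traps while the value hovers above the arm threshold.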


14.10.2 Database maintenance
          You can periodically purge the Data Collection entries by selecting Options → Server
          Setup, clicking the Files tab, and then selecting Schedule SNMP Files to Delete from the
          drop-down list. See Figure 14-108. Select the day and time at which the purge should run.




          Figure 14-108 Purge Data Collection files


           Important: There are documented steps on how to perform important maintenance of
           Tivoli NetView. Refer to the IBM Redbook Tivoli NetView and Friends, SG24-6019.




                                          Chapter 14. Using TotalStorage Productivity Center for Fabric   783
14.10.3 Troubleshooting the Data Collection daemon
               If you find that data is not being collected, ensure that the snmpcollect daemon is running
               and that space is available in the collection file system, C:\usr\ov\databases\snmpcollect.
               The daemon can stop running if there is no file system space.

               To verify that the daemon is running, type ovstatus snmpcollect from the DOS command
               prompt (see Example 14-4).

               Example 14-4 snmpcollect daemon running
               C:>ovstatus snmpcollect
                object manager name: snmpcollect
                behavior:            OVs_WELL_BEHAVED
                state:               RUNNING
                PID:                 1536
                last message:        Initialization complete.
                exit status:         -

               Done

               C:>


               If the snmpcollect daemon is not running, you will see a state value of NOT RUNNING from the
               ovstatus snmpcollect command as shown in Example 14-5.

               Example 14-5 snmpcollect daemon stopped
               C:>ovstatus snmpcollect
                object manager name: snmpcollect
                behavior:            OVs_WELL_BEHAVED
                state:               NOT RUNNING
                PID:                 1536
                last message:        Exited due to user request.
                exit status:         -

               Done

               C:>


               The snmpcollect daemon can be started manually. At a command prompt, type
               ovstart snmpcollect. You will see the output shown in Example 14-6. We then issued
               ovstatus snmpcollect for verification, as shown in Example 14-4.

               Example 14-6 snmpcollect started
               C:>ovstart snmpcollect
               Done

               C:>
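If you automate this check, the state value can be parsed straight from the command output shown in Examples 14-4 and 14-5. The helper below is our own sketch (not part of NetView); it assumes `ovstatus` is on the PATH:

```python
# Sketch of a health check for the snmpcollect daemon: parse the
# "state:" line from ovstatus output (format per Examples 14-4/14-5).
import subprocess

def daemon_state(output: str) -> str:
    """Extract the state value from ovstatus output."""
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("state:"):
            return line.split(":", 1)[1].strip()
    return "UNKNOWN"

def snmpcollect_running() -> bool:
    """True if ovstatus reports the daemon as RUNNING."""
    out = subprocess.run(["ovstatus", "snmpcollect"],
                         capture_output=True, text=True).stdout
    return daemon_state(out) == "RUNNING"
```

If `snmpcollect_running()` returns False, issue `ovstart snmpcollect` as shown in Example 14-6.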


                Note: If no Data Collections are currently defined to the MIB Data Collector tool, the
                snmpcollect daemon will not run.


14.10.4 NetView Graph Utility
               We used the NetView graph utility to display the MIB object data that we collected in 14.10.1,
               “Creating a Data Collection” on page 775.


The NetView Graph tool provides a convenient way to display numerical performance
information from collected data.

We now show how to display the collected data from the previous Data Collection that was
built for ITSOSW2 (swFCPortTxFrames). We start by single-clicking ITSOSW2 on the
NetView topology map (Figure 14-109).




Figure 14-109 Select ITSOSW2

Select Tools → MIB → Graph Data to launch the graph utility. This reports on the historical
data that has been collected for ITSOSW2. After you select this option, NetView takes some
time to process the data and present it in the graphical display; the graph build time depends
on the amount of data collected. Figure 14-110 shows the progress indicator.




Figure 14-110 Building graph

After the graph is built, it displays the swFCTxFrames data that was collected (Figure 14-111).
Note there are multiple instances of the object ID mapped — that is, swFCPortTxFrames.1,
swFCPortTxFrames.2 and so on. In this case they represent the data collected for each port
in the switch.
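Because swFCPortTxFrames is a cumulative counter, a graph of raw values climbs continuously; the interesting quantity is usually the change between polls. The sketch below (our own illustration, not NetView code) turns consecutive counter readings for one port instance into per-interval frame counts:

```python
# Convert cumulative counter samples (e.g., swFCPortTxFrames.1 readings
# taken at successive polls) into frames transmitted per interval.
def counter_deltas(samples):
    """Frames sent in each interval between consecutive polls."""
    return [b - a for a, b in zip(samples, samples[1:])]

# Four successive readings for one port instance:
print(counter_deltas([1000, 1450, 1450, 2100]))  # [450, 0, 650]
```

A zero delta indicates an idle interval on that port; steadily large deltas identify the busiest ports.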




Figure 14-111 Graphing of swFCTxFrames

               For viewing purposes, we adjusted the x-axis for Time by clicking Edit → Graph Properties
               in the open graph window. This allowed us to zoom into shorter time periods. See
               Figure 14-112.




               Figure 14-112 Graph properties

                Any MIB object identifier that has been collected with the NetView MIB Data Collector can
                be graphed in the NetView Graph facility using this process.



14.11 Real-time reporting
               In this section we introduce the NetView MIB Tool Builder for real-time reporting.
               Figure 14-113 provides an overview.




           Figure 14-113 Real-time reporting — Tool Builder overview


            Important: Depending on the configuration, some advanced functionality may be initially
            disabled in NetView under Tivoli SAN Manager. This section requires this functionality to
            be enabled. To enable all functionality required, in NetView, click Options → Polling and
            check the Poll All Nodes field. This is shown in Figure 14-114.




           Figure 14-114 Enabling all functions in NetView


14.11.1 MIB Tool Builder
           In this section we introduce the NetView MIB Tool Builder. The Tool Builder enables you to
           build, modify, and delete MIB applications without programming. MIB applications are
           programs that NetView uses to monitor the network. A MIB application monitors the
           real-time performance of specific MIB objects on a regular basis and produces output such
           as forms, tables, or graphs.



We will demonstrate how to build a MIB application that queries the swFCPortTxFrames
                MIB object identifier in the SW-MIB. This process can be used to query any SNMP-enabled
                device using NetView.

                With the switch ITSOSW2 selected, we start building the MIB application by launching the
                Tool Builder: select Tools → MIB → Tool Builder → New. The MIB Tool Builder interface
                is launched, as shown in Figure 14-115. Click New to create a new Tool Builder entry for
                collecting data on ITSOSW2.




               Figure 14-115 MIB tool Builder interface

                The Tool Builder Wizard Step 1 window is displayed (Figure 14-116). We entered
                FCPortTxFrames in the Title field and clicked in the Tool ID field to auto-populate the
                remaining fields. We clicked Next to continue with the wizard.




               Figure 14-116 Tool Wizard Step 1




The Tool Wizard Step 2 window is displayed. You can see that our title of FCPortTxFrames
has carried over. We are now ready to select the display type: Forms, Tables, or Graphs. We
choose Graph and click New, as shown in Figure 14-117.




Figure 14-117 Tool Wizard Step 2

The NetView MIB Browser is now displayed. We use the MIB Browser to navigate down
to the swFCPortTxFrames object identifier. Use the Down Tree button to navigate through
the MIB tree. Figure 14-118 shows the path through the SW-MIB port table. Click OK to add
the object identifier.


    SW MIB - Port Table group

            private...
              enterprise...
                bcsi...
                  commDev...
                    fibrechannel...
                       fcSwitch...
                         sw...
                          swFcPort...
                             swFcPortTable...
                               swFCPortTxFrames
Figure 14-118 SW-MIB — Port Table




The newly created MIB application is displayed in the Tool Builder Step 2 of 2 window. See
               Figure 14-119 for the completed MIB Application. Click OK to complete the definition.




               Figure 14-119 Final step of Tool Wizard

                Now the final Tool Builder window is displayed, showing the newly created MIB
                application (Figure 14-120). Click Close to close the window. The new MIB application
                has been created successfully.




               Figure 14-120 New MIB application — FXPortTXFrames




14.11.2 Displaying real-time data
           Now that we have a MIB application, we want to collect real-time data from the switch. Select
           ITSOSW2 from the NetView topology map by single-clicking the ITSOSW2 symbol, then
           select Monitor → Other → FCPortTXFrames. Our MIB application FCPortTXFrames has
           been added to the menu (shown in Figure 14-121).




           Figure 14-121 Monitor pull-down menu

           Clicking the FCPortTXFrames option launches a graph utility, shown in Figure 14-122.




           Figure 14-122 NetView Graph starting

           The collection of MIB data starts immediately after selecting the swFCPortTXFrames MIB
           application from the Monitor → Other menu. Figure 14-123 shows the data being collected
           and displayed for each MIB instance of the ITSOSW2.




Figure 14-123 Graph of FCPortTXFrames

               The polling interval of the application can be controlled using the Poll Nodes Every field
               located under Edit → Graph Properties. See Figure 14-124.




               Figure 14-124 Graph Properties




This launches a dialog to specify how often NetView Graph receives real-time data for
graphing, shown in Figure 14-125. This determines how often the nodes are asked for data.




Figure 14-125 Polling Interval
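Conceptually, the Poll Nodes Every setting drives a loop that asks the node for fresh values once per interval and appends them to the graph. The sketch below is our own illustration of that loop (not NetView code); the query function is a stand-in, not a real SNMP call:

```python
# Conceptual polling loop: one query per interval, results accumulated
# for graphing. In NetView the interval is the Poll Nodes Every value.
import time

def poll_loop(query, interval_seconds, max_polls):
    """Collect one value per interval; return the series gathered."""
    series = []
    for _ in range(max_polls):
        series.append(query())          # ask the node for data
        time.sleep(interval_seconds)    # wait out the polling interval
    return series

# Example with a dummy query and a zero-length interval:
counter = iter(range(3))
print(poll_loop(lambda: next(counter), interval_seconds=0, max_polls=3))  # [0, 1, 2]
```

A shorter interval gives a more responsive graph at the cost of more SNMP traffic to the switch.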

We continued to use the Tool Builder process defined in 14.11.1, “MIB Tool Builder” on
page 787 to build additional MIB applications for real-time performance monitoring. We used
the following MIB objects:
   swFcPortTXWords
   swFcPortRXC2Frames
   swFCPortRXC3Frames
   fcFXPortLinkFailures
   fcFXPortSyncLosses
   fcFXPortSigLosses

Figure 14-126 shows the newly defined MIB Applications as they appear in the Tool Builder.




Figure 14-126 Tool Builder with all MIB objects defined




Figure 14-127 shows all the above MIB objects as they appear in the NetView Monitor
                pull-down menu. Note that we abbreviated the names of the MIB applications listed in the
                Monitor → Other menu for ease of use.




               Figure 14-127 All MIB objects in NetView


14.11.3 SmartSets
                With Productivity Center for Fabric (Tivoli SAN Manager) providing the management of the
                SAN, we can further extend the management functionality of the SAN from a LAN and iSCSI
                perspective. NetView SmartSets give us this ability. This section describes the concept of
                the NetView SmartSet (see Figure 14-128) and provides details on how to group and
                manage your SAN-attached resources from a TCP/IP (SNMP) perspective.

               By default, the iSCSI SmartSet is created by Productivity Center for Fabric when nvsniffer is
               enabled. SmartSets for iSCSI “initiators” and “targets” can be created using the process
               described here.


                Figure 14-128 SmartSet Overview




PDF
Tec implementation examples sg245216
PDF
Tivoli storage productivity center v4.2 release guide sg247894
PDF
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
PDF
Storage migration and consolidation with ibm total storage products redp3888
PDF
Slr to tivoli performance reporter for os 390 migration cookbook sg245128
PDF
Setup and configuration for ibm tivoli access manager for enterprise single s...
Postgresql v15.1
Postgresql v14.6 Document Guide
IBM MobileFirst Platform v7.0 Pot Intro v0.1
IBM MobileFirst Platform v7 Tech Overview
IBM MobileFirst Foundation Version Flyer v1.0
IBM MobileFirst Platform v7.0 POT Offers Lab v1.0
IBM MobileFirst Platform v7.0 pot intro v0.1
IBM MobileFirst Platform v7.0 POT App Mgmt Lab v1.1
IBM MobileFirst Platform v7.0 POT Analytics v1.1
IBM MobileFirst Platform Pot Sentiment Analysis v3
IBM MobileFirst Platform 7.0 POT InApp Feedback V0.1
Tme 10 cookbook for aix systems management and networking sg244867
Tivoli firewall magic redp0227
Tivoli data warehouse version 1.3 planning and implementation sg246343
Tec implementation examples sg245216
Tivoli storage productivity center v4.2 release guide sg247894
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
Storage migration and consolidation with ibm total storage products redp3888
Slr to tivoli performance reporter for os 390 migration cookbook sg245128
Setup and configuration for ibm tivoli access manager for enterprise single s...

Recently uploaded (20)

PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
Review of recent advances in non-invasive hemoglobin estimation
PPTX
MYSQL Presentation for SQL database connectivity
PPTX
Big Data Technologies - Introduction.pptx
PPT
Teaching material agriculture food technology
PDF
Encapsulation theory and applications.pdf
PDF
cuic standard and advanced reporting.pdf
PPTX
A Presentation on Artificial Intelligence
PDF
CIFDAQ's Market Insight: SEC Turns Pro Crypto
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
NewMind AI Monthly Chronicles - July 2025
PPTX
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PPTX
Cloud computing and distributed systems.
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Review of recent advances in non-invasive hemoglobin estimation
MYSQL Presentation for SQL database connectivity
Big Data Technologies - Introduction.pptx
Teaching material agriculture food technology
Encapsulation theory and applications.pdf
cuic standard and advanced reporting.pdf
A Presentation on Artificial Intelligence
CIFDAQ's Market Insight: SEC Turns Pro Crypto
Building Integrated photovoltaic BIPV_UPV.pdf
NewMind AI Monthly Chronicles - July 2025
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
The Rise and Fall of 3GPP – Time for a Sabbatical?
Cloud computing and distributed systems.
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Unlocking AI with Model Context Protocol (MCP)
Agricultural_Statistics_at_a_Glance_2022_0.pdf
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
20250228 LYD VKU AI Blended-Learning.pptx
Reach Out and Touch Someone: Haptics and Empathic Computing

Ibm total storage productivity center v2.3 getting started sg246490

Become a published author . . . xvii
Comments welcome . . . xvii

Part 1. IBM TotalStorage Productivity Center foundation . . . 1

Chapter 1. IBM TotalStorage Productivity Center overview . . . 3
1.1 Introduction to IBM TotalStorage Productivity Center . . . 4
   1.1.1 Standards organizations and standards . . . 4
1.2 IBM TotalStorage Open Software family . . . 5
1.3 IBM TotalStorage Productivity Center . . . 6
   1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data . . . 7
   1.3.2 Fabric subject matter expert: Productivity Center for Fabric . . . 9
   1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk . . . 12
   1.3.4 Replication subject matter expert: Productivity Center for Replication . . . 14
1.4 IBM TotalStorage Productivity Center . . . 16
   1.4.1 Productivity Center for Disk and Productivity Center for Replication . . . 17
   1.4.2 Event services . . . 23
1.5 Taking steps toward an On Demand environment . . . 24

Chapter 2. Key concepts . . . 27
2.1 IBM TotalStorage Productivity Center architecture . . . 28
   2.1.1 Architectural overview diagram . . . 28
   2.1.2 Architectural layers . . . 29
   2.1.3 Relationships between the managers and components . . . 31
   2.1.4 Collecting data . . . 32
2.2 Standards used in IBM TotalStorage Productivity Center . . . 34
   2.2.1 ANSI standards . . . 34
   2.2.2 Web-Based Enterprise Management . . . 34
   2.2.3 Storage Networking Industry Association . . . 35
   2.2.4 Simple Network Management Protocol . . . 36
   2.2.5 Fibre Alliance MIB . . . 37
2.3 Service Location Protocol (SLP) overview . . . 38
   2.3.1 SLP architecture . . . 38
   2.3.2 Common Information Model . . . 47
2.4 Component interaction . . . 49
   2.4.1 CIMOM discovery with SLP . . . 49
   2.4.2 How CIM Agent works . . . 50
2.5 Tivoli Common Agent Services . . . 51
   2.5.1 Tivoli Agent Manager . . . 53
   2.5.2 Common Agent . . . 53

Part 2. Installing the IBM TotalStorage Productivity Center base product suite . . . 55

Chapter 3. Installation planning and considerations . . . 57
3.1 Configuration . . . 58
3.2 Installation prerequisites . . . 58
   3.2.1 TCP/IP ports used by TotalStorage Productivity Center . . . 59
   3.2.2 Default databases created during the installation . . . 62
3.3 Our lab setup environment . . . 62
3.4 Pre-installation check list . . . 64
3.5 User IDs and security . . . 65
   3.5.1 User IDs . . . 65
   3.5.2 Increasing user security . . . 68
   3.5.3 Certificates and key files . . . 69
   3.5.4 Services and service accounts . . . 69
3.6 Starting and stopping the managers . . . 70
3.7 Windows Management Instrumentation . . . 70
3.8 World Wide Web Publishing . . . 73
3.9 Uninstalling Internet Information Services . . . 73
3.10 Installing SNMP . . . 73
3.11 IBM TotalStorage Productivity Center for Fabric . . . 75
   3.11.1 The computer name . . . 75
   3.11.2 Database considerations . . . 75
   3.11.3 Windows Terminal Services . . . 75
   3.11.4 Tivoli NetView . . . 76
   3.11.5 Personal firewall . . . 77
   3.11.6 Changing the HOSTS file . . . 77
3.12 IBM TotalStorage Productivity Center for Data . . . 78
   3.12.1 Server recommendations . . . 78
   3.12.2 Supported subsystems and databases . . . 78
   3.12.3 Security considerations . . . 79
   3.12.4 Creating the DB2 database . . . 81

Chapter 4. Installing the IBM TotalStorage Productivity Center suite . . . 83
4.1 Installing the IBM TotalStorage Productivity Center . . . 84
   4.1.1 Considerations . . . 84
4.2 Prerequisite software installation . . . 85
   4.2.1 Best practices . . . 85
   4.2.2 Installing prerequisite software . . . 85
4.3 Suite installation . . . 110
   4.3.1 Best practices . . . 110
   4.3.2 Installing the TotalStorage Productivity Center suite . . . 110
   4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base . . . 125
   4.3.4 IBM TotalStorage Productivity Center for Disk . . . 140
   4.3.5 IBM TotalStorage Productivity Center for Replication . . . 146
   4.3.6 IBM TotalStorage Productivity Center for Fabric . . . 157
   4.3.7 IBM TotalStorage Productivity Center for Data . . . 171

Chapter 5. CIMOM install and configuration . . . 191
5.1 Introduction . . . 192
5.2 Planning considerations for Service Location Protocol . . . 192
   5.2.1 Considerations for using SLP DAs . . . 192
   5.2.2 SLP configuration recommendation . . . 193
5.3 General performance guidelines . . . 194
5.4 Planning considerations for CIMOM . . . 194
   5.4.1 CIMOM configuration recommendations . . . 195
5.5 Installing CIM agent for ESS . . . 196
   5.5.1 ESS CLI install . . . 196
   5.5.2 DS CIM Agent install . . . 202
   5.5.3 Post-installation tasks . . . 211
5.6 Configuring the DS CIM Agent for Windows . . . 212
   5.6.1 Registering DS devices . . . 212
   5.6.2 Registering ESS devices . . . 213
   5.6.3 Register ESS server for Copy Services . . . 214
   5.6.4 Restart the CIMOM . . . 215
   5.6.5 CIMOM user authentication . . . 215
5.7 Verifying connection to the ESS . . . 216
   5.7.1 Problem determination . . . 219
   5.7.2 Confirming the ESS CIMOM is available . . . 220
   5.7.3 Setting up the Service Location Protocol Directory Agent . . . 221
   5.7.4 Configuring TotalStorage Productivity Center for SLP discovery . . . 223
   5.7.5 Registering the DS CIM Agent to SLP . . . 224
   5.7.6 Verifying and managing CIMOM availability . . . 224
5.8 Installing CIM agent for IBM DS4000 family . . . 225
   5.8.1 Verifying and managing CIMOM availability . . . 233
5.9 Configuring CIMOM for SAN Volume Controller . . . 234
   5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account . . . 235
   5.9.2 Registering the SAN Volume Controller host in SLP . . . 241
5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary . . . 241
   5.10.1 SLP registration and slptool . . . 242
   5.10.2 Persistency of SLP registration . . . 243
   5.10.3 Configuring slp.reg file . . . 243

Part 3. Configuring the IBM TotalStorage Productivity Center . . . 245

Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk . . . 247
6.1 Productivity Center for Disk discovery summary . . . 248
6.2 SLP DA definition . . . 248
   6.2.1 Verifying and managing CIMOMs availability . . . 256
6.3 Disk and Replication Manager remote GUI . . . 259
   6.3.1 Installing Remote Console for Performance Manager function . . . 270
   6.3.2 Launching Remote Console for TotalStorage Productivity Center . . . 277

Chapter 7. Configuring TotalStorage Productivity Center for Replication . . . 279
7.1 Installing a remote GUI and CLI . . . 280

Chapter 8. Configuring IBM TotalStorage Productivity Center for Data . . . 289
8.1 Configuring the CIM Agents . . . 290
   8.1.1 CIM and SLP interfaces within Data Manager . . . 290
   8.1.2 Configuring CIM Agents . . . 290
   8.1.3 Setting up a disk alias . . . 293
8.2 Setting up the Web GUI . . . 295
   8.2.1 Using IBM HTTP Server . . . 295
   8.2.2 Using Internet Information Server . . . 299
   8.2.3 Configuring the URL in Fabric Manager . . . 303
8.3 Installing the Data Manager remote console . . . 304
8.4 Configuring Data Manager for databases . . . 313
8.5 Alert disposition . . . 316

Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric . . . 319
9.1 TotalStorage Productivity Center component interaction . . . 320
   9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base . . . 320
   9.1.2 SNMP . . . 320
   9.1.3 Tivoli Provisioning Manager . . . 320
9.2 Post-installation procedures . . . 321
   9.2.1 Installing Productivity Center for Fabric – Agent . . . 321
   9.2.2 Installing Productivity Center for Fabric – Remote Console . . . 331
9.3 Configuring IBM TotalStorage Productivity Center for Fabric . . . 342
   9.3.1 Configuring SNMP . . . 342
   9.3.2 Configuring the outband agents . . . 346
   9.3.3 Checking inband agents . . . 348
   9.3.4 Performing an initial poll and setting up the poll interval . . . 349

Chapter 10. Deployment of agents . . . 351
10.1 Installing the agents . . . 352
10.2 Data Agent installation using the installer . . . 354
10.3 Deploying the agent . . . 361

Part 4. Using the IBM TotalStorage Productivity Center . . . 373

Chapter 11. Using TotalStorage Productivity Center for Disk . . . 375
11.1 Productivity Center common base: Introduction . . . 376
11.2 Launching TotalStorage Productivity Center . . . 376
11.3 Exploiting Productivity Center common base . . . 377
   11.3.1 Launch Device Manager . . . 378
11.4 Performing volume inventory . . . 378
11.5 Changing the display name of a storage device . . . 382
11.6 Working with ESS . . . 383
   11.6.1 ESS volume inventory . . . 384
   11.6.2 Assigning and unassigning ESS volumes . . . 385
   11.6.3 Creating new ESS volumes . . . 387
   11.6.4 Launch device manager for an ESS device . . . 388
11.7 Working with DS8000 . . . 389
   11.7.1 DS8000 volume inventory . . . 390
   11.7.2 Assigning and unassigning DS8000 volumes . . . 392
   11.7.3 Creating new DS8000 volumes . . . 393
   11.7.4 Launch device manager for a DS8000 device . . . 394
11.8 Working with SAN Volume Controller . . . 396
   11.8.1 Working with SAN Volume Controller MDisks . . . 396
   11.8.2 Creating new MDisks on supported storage devices . . . 399
   11.8.3 Create and view SAN Volume Controller VDisks . . . 402
11.9 Working with DS4000 family or FAStT storage . . . 406
   11.9.1 Working with DS4000 or FAStT volumes . . . 407
   11.9.2 Creating DS4000 or FAStT volumes . . . 409
   11.9.3 Assigning hosts to DS4000 and FAStT volumes . . . 413
   11.9.4 Unassigning hosts from DS4000 or FAStT volumes . . . 414
   11.9.5 Volume properties . . . 415
11.10 Event Action Plan Builder . . . 416
   11.10.1 Applying an Event Action Plan to a managed system or group . . . 421
   11.10.2 Exporting and importing Event Action Plans . . . 423

Chapter 12. Using TotalStorage Productivity Center Performance Manager . . . 427
12.1 Exploiting Performance Manager . . . 428
   12.1.1 Performance Manager GUI . . . 429
   12.1.2 Performance Manager data collection . . . 429
   12.1.3 Using IBM Director Scheduler function . . . 435
   12.1.4 Reviewing data collection task status . . . 437
   12.1.5 Managing Performance Manager database . . . 439
   12.1.6 Performance Manager gauges . . . 443
   12.1.7 ESS thresholds . . . 457
   12.1.8 Data collection for SAN Volume Controller . . . 460
   12.1.9 SAN Volume Controller thresholds . . . 461
   12.1.10 Data collection for the DS6000 and DS8000 . . . 463
   12.1.11 DS6000 and DS8000 thresholds . . . 466
12.2 Exploiting gauges . . . 467
   12.2.1 Before you begin . . . 468
   12.2.2 Creating gauges: an example . . . 468
   12.2.3 Zooming in on the specific time period . . . 471
   12.2.4 Modify gauge to view array level metrics . . . 471
   12.2.5 Modify gauge to review multiple metrics in same chart . . . 474
12.3 Performance Manager command line interface . . . 475
   12.3.1 Performance Manager CLI commands . . . 475
   12.3.2 Sample command outputs . . . 477
12.4 Volume Performance Advisor (VPA) . . . 478
   12.4.1 VPA introduction . . . 478
   12.4.2 The provisioning challenge . . . 478
   12.4.3 Workload characterization and workload profiles . . . 479
   12.4.4 Workload profile values . . . 479
   12.4.5 How the Volume Performance Advisor makes decisions . . . 480
   12.4.6 Enabling the Trace Logging for Director GUI Interface . . . 481
   12.4.7 Getting started . . . 482
   12.4.8 Creating and managing workload profiles . . . 508

Chapter 13. Using TotalStorage Productivity Center for Data . . . 521
13.1 TotalStorage Productivity Center for Data overview . . . 522
   13.1.1 Business purpose of TotalStorage Productivity Center for Data . . . 522
   13.1.2 Components of TotalStorage Productivity Center for Data . . . 522
   13.1.3 Security considerations . . . 523
13.2 Functions of TotalStorage Productivity Center for Data . . . 523
   13.2.1 Basic menu displays . . . 524
   13.2.2 Discover and monitor Agents, disks, filesystems, and databases . . . 526
   13.2.3 Reporting . . . 529
   13.2.4 Alerts . . . 532
   13.2.5 Chargeback: Charging for storage usage . . . 533
13.3 OS monitoring . . . 533
   13.3.1 Navigation tree . . . 534
   13.3.2 Groups . . . 535
   13.3.3 Discovery . . . 540
   13.3.4 Pings . . . 542
   13.3.5 Probes . . . 545
   13.3.6 Profiles . . . 547
   13.3.7 Scans . . . 552
13.4 OS alerts . . . 555
   13.4.1 Alerting navigation tree . . . 558
   13.4.2 Computer alerts . . . 560
   13.4.3 Filesystem alerts . . . 562
   13.4.4 Directory alerts . . . 563
   13.4.5 Alert logs . . .
. . . . . . . . . . . . . . . . . . . . . 564 Contents vii
13.5 Policy management . . . . . . . . . . 565
13.5.1 Quotas . . . . . . . . . . 565
13.5.2 Network Appliance Quotas . . . . . . . . . . 570
13.5.3 Constraints . . . . . . . . . . 570
13.5.4 Filesystem extension and LUN provisioning . . . . . . . . . . 576
13.5.5 Scheduled Actions . . . . . . . . . . 582
13.6 Database monitoring . . . . . . . . . . 583
13.6.1 Groups . . . . . . . . . . 584
13.6.2 Probes . . . . . . . . . . 585
13.6.3 Profiles . . . . . . . . . . 586
13.6.4 Scans . . . . . . . . . . 587
13.7 Database Alerts . . . . . . . . . . 588
13.7.1 Instance Alerts . . . . . . . . . . 588
13.7.2 Database-Tablespace Alerts . . . . . . . . . . 588
13.7.3 Table Alerts . . . . . . . . . . 589
13.7.4 Alert log . . . . . . . . . . 589
13.8 Databases policy management . . . . . . . . . . 589
13.8.1 Network Quotas . . . . . . . . . . 590
13.8.2 Instance Quota . . . . . . . . . . 591
13.8.3 Database Quota . . . . . . . . . . 591
13.9 Database administration samples . . . . . . . . . . 591
13.9.1 Database up . . . . . . . . . . 591
13.9.2 Database utilization . . . . . . . . . . 591
13.9.3 Need for reorganization . . . . . . . . . . 591
13.10 Data Manager reporting capabilities . . . . . . . . . . 592
13.10.1 Major reporting categories . . . . . . . . . . 593
13.11 Using the standard reporting functions . . . . . . . . . . 594
13.11.1 Asset Reporting . . . . . . . . . . 595
13.11.2 Storage Subsystems Reporting . . . . . . . . . . 604
13.11.3 Availability Reporting . . . . . . . . . . 604
13.11.4 Capacity Reporting . . . . . . . . . . 605
13.11.5 Usage Reporting . . . . . . . . . . 607
13.11.6 Usage Violation Reporting . . . . . . . . . . 610
13.11.7 Backup Reporting . . . . . . . . . . 627
13.12 TotalStorage Productivity Center for Data ESS Reporting . . . . . . . . . . 634
13.12.1 ESS Reporting . . . . . . . . . . 634
13.13 IBM Tivoli Storage Resource Manager top 10 reports . . . . . . . . . . 653
13.13.1 ESS used and free storage . . . . . . . . . . 653
13.13.2 ESS attached hosts report . . . . . . . . . . 656
13.13.3 Computer Uptime Reporting . . . . . . . . . . 657
13.13.4 Growth in storage used and number of files . . . . . . . . . . 659
13.13.5 Incremental backup trends . . . . . . . . . . 661
13.13.6 Database reports against DBMS size . . . . . . . . . . 665
13.13.7 Database instance storage report . . . . . . . . . . 667
13.13.8 Database reports size by instance and by computer . . . . . . . . . . 667
13.13.9 Locate the LUN on which a database is allocated . . . . . . . . . . 669
13.13.10 Finding important files on your systems . . . . . . . . . . 672
13.14 Creating customized reports . . . . . . . . . . 683
13.14.1 System Reports . . . . . . . . . . 683
13.14.2 Reports owned by a specific username . . . . . . . . . . 686
13.14.3 Batch Reports . . . . . . . . . . 688
13.15 Setting up a schedule for daily reports . . . . . . . . . . 697
13.16 Setting up a reports Web site . . . . . . . . . . 698
13.17 Charging for storage usage . . . . . . . . . . 700

Chapter 14. Using TotalStorage Productivity Center for Fabric . . . . . . . . . . 703
14.1 NetView navigation overview . . . . . . . . . . 704
14.1.1 NetView interface . . . . . . . . . . 704
14.1.2 Maps and submaps . . . . . . . . . . 704
14.1.3 NetView window structure . . . . . . . . . . 704
14.1.4 NetView Explorer . . . . . . . . . . 705
14.1.5 NetView Navigation Tree . . . . . . . . . . 707
14.1.6 Object selection and NetView properties . . . . . . . . . . 707
14.1.7 Object symbols . . . . . . . . . . 709
14.1.8 Object status . . . . . . . . . . 709
14.1.9 Status propagation . . . . . . . . . . 711
14.1.10 NetView and Productivity Center for Fabric integration . . . . . . . . . . 711
14.2 Walk-through of Productivity Center for Fabric . . . . . . . . . . 712
14.2.1 Device Centric view . . . . . . . . . . 713
14.2.2 Host Centric view . . . . . . . . . . 714
14.2.3 SAN view . . . . . . . . . . 714
14.2.4 Launching element managers . . . . . . . . . . 723
14.2.5 Explore view . . . . . . . . . . 725
14.3 Topology views . . . . . . . . . . 725
14.3.1 SAN view . . . . . . . . . . 727
14.3.2 Device Centric View . . . . . . . . . . 731
14.3.3 Host Centric View . . . . . . . . . . 732
14.3.4 iSCSI discovery . . . . . . . . . . 733
14.3.5 MDS 9000 discovery . . . . . . . . . . 734
14.4 SAN menu options . . . . . . . . . . 735
14.4.1 SAN Properties . . . . . . . . . . 735
14.5 Application launch . . . . . . . . . . 739
14.5.1 Native support . . . . . . . . . . 740
14.5.2 NetView support for Web interfaces . . . . . . . . . . 740
14.5.3 Launching TotalStorage Productivity Center for Data . . . . . . . . . . 742
14.5.4 Other menu options . . . . . . . . . . 742
14.6 Status cycles . . . . . . . . . . 743
14.7 Practical cases . . . . . . . . . . 745
14.7.1 Cisco MDS 9000 discovery . . . . . . . . . . 745
14.7.2 Removing a connection on a device running an inband agent . . . . . . . . . . 747
14.7.3 Removing a connection on a device not running an agent . . . . . . . . . . 750
14.7.4 Powering off a switch . . . . . . . . . . 752
14.7.5 Running discovery on a RNID-compatible device . . . . . . . . . . 756
14.7.6 Outband agents only . . . . . . . . . . 758
14.7.7 Inband agents only . . . . . . . . . . 760
14.7.8 Disk devices discovery . . . . . . . . . . 762
14.7.9 Well placed agent strategy . . . . . . . . . . 764
14.8 Netview . . . . . . . . . . 766
14.8.1 Reporting overview . . . . . . . . . . 767
14.8.2 SNMP and MIBs . . . . . . . . . . 767
14.9 NetView setup and configuration . . . . . . . . . . 769
14.9.1 Advanced Menu . . . . . . . . . . 769
14.9.2 Copy Brocade MIBs . . . . . . . . . . 770
14.9.3 Loading MIBs . . . . . . . . . . 771
14.10 Historical reporting . . . . . . . . . . 774
14.10.1 Creating a Data Collection . . . . . . . . . . 775
14.10.2 Database maintenance . . . . . . . . . . 783
14.10.3 Troubleshooting the Data Collection daemon . . . . . . . . . . 784
14.10.4 NetView Graph Utility . . . . . . . . . . 784
14.11 Real-time reporting . . . . . . . . . . 786
14.11.1 MIB Tool Builder . . . . . . . . . . 787
14.11.2 Displaying real-time data . . . . . . . . . . 791
14.11.3 SmartSets . . . . . . . . . . 794
14.11.4 SmartSets and Data Collections . . . . . . . . . . 802
14.11.5 Seed file . . . . . . . . . . 805
14.12 Productivity Center for Fabric and iSCSI . . . . . . . . . . 810
14.13 What is iSCSI? . . . . . . . . . . 811
14.14 How does iSCSI work? . . . . . . . . . . 811
14.15 Productivity Center for Fabric and iSCSI . . . . . . . . . . 812
14.15.1 Functional description . . . . . . . . . . 813
14.15.2 iSCSI discovery . . . . . . . . . . 813
14.16 ED/FI - SAN Error Predictor . . . . . . . . . . 814
14.16.1 Overview . . . . . . . . . . 814
14.16.2 Error processing . . . . . . . . . . 816
14.16.3 Configuration for ED/FI - SAN Error Predictor . . . . . . . . . . 818
14.16.4 Using ED/FI . . . . . . . . . . 820
14.16.5 Searching for the faulted device on the topology map . . . . . . . . . . 822
14.16.6 Removing notifications . . . . . . . . . . 825

Chapter 15. Using TotalStorage Productivity Center for Replication . . . . . . . . . . 827
15.1 TotalStorage Productivity Center for Replication overview . . . . . . . . . . 828
15.1.1 Supported Copy Services . . . . . . . . . . 828
15.1.2 Replication session . . . . . . . . . . 830
15.1.3 Storage group . . . . . . . . . . 831
15.1.4 Storage pools . . . . . . . . . . 831
15.1.5 Relationship of group, pool, and session . . . . . . . . . . 832
15.1.6 Copyset and sequence concepts . . . . . . . . . . 833
15.2 Exploiting Productivity Center for replication . . . . . . . . . . 834
15.2.1 Before you start . . . . . . . . . . 834
15.2.2 Adding a replication device . . . . . . . . . . 834
15.2.3 Creating a storage group . . . . . . . . . . 838
15.2.4 Modifying a storage group . . . . . . . . . . 841
15.2.5 Viewing storage group properties . . . . . . . . . . 842
15.2.6 Deleting a storage group . . . . . . . . . . 843
15.2.7 Creating a storage pool . . . . . . . . . . 844
15.2.8 Modifying a storage pool . . . . . . . . . . 847
15.2.9 Deleting a storage pool . . . . . . . . . . 848
15.2.10 Viewing storage pool properties . . . . . . . . . . 849
15.2.11 Creating storage paths . . . . . . . . . . 850
15.2.12 Point-in-Time Copy - creating a session . . . . . . . . . . 852
15.2.13 Creating a session - verifying source-target relationship . . . . . . . . . . 856
15.2.14 Continuous Synchronous Remote Copy - creating a session . . . . . . . . . . 861
15.2.15 Managing a Point-in-Time copy . . . . . . . . . . 866
15.2.16 Managing a Continuous Synchronous Remote Copy . . . . . . . . . . 873
15.3 Using Command Line Interface (CLI) for replication . . . . . . . . . . 884
15.3.1 Session details . . . . . . . . . . 886
15.3.2 Starting a session . . . . . . . . . . 888
15.3.3 Suspending a session . . . . . . . . . . 892
15.3.4 Terminating a session . . . . . . . . . . 893
Chapter 16. Hints, tips, and good-to-knows . . . . . . . . . . 899
16.1 SLP configuration recommendation . . . . . . . . . . 900
16.1.1 SLP registration and slptool . . . . . . . . . . 901
16.2 Tivoli Common Agent Services . . . . . . . . . . 901
16.2.1 Locations of configured user IDs . . . . . . . . . . 901
16.2.2 Resource Manager registration . . . . . . . . . . 902
16.2.3 Tivoli Agent Manager status . . . . . . . . . . 902
16.2.4 Registered Fabric Agents . . . . . . . . . . 904
16.2.5 Registered Data Agents . . . . . . . . . . 906
16.3 Launchpad . . . . . . . . . . 906
16.3.1 Launchpad installation . . . . . . . . . . 907
16.3.2 Launchpad customization . . . . . . . . . . 909
16.4 Remote consoles . . . . . . . . . . 911
16.5 Verifying whether a port is in use . . . . . . . . . . 911
16.6 Manually removing old CIMOM entries . . . . . . . . . . 911
16.7 Collecting logs for support . . . . . . . . . . 917
16.7.1 IBM Director logfiles . . . . . . . . . . 917
16.7.2 Using Event Action Plans . . . . . . . . . . 921
16.7.3 Following Discovery using Windows raswatch utility . . . . . . . . . . 921
16.7.4 DB2 database checking . . . . . . . . . . 922
16.7.5 IBM WebSphere tracing and logfile browsing . . . . . . . . . . 927
16.8 SLP and CIM Agent problem determination . . . . . . . . . . 928
16.8.1 Enabling SLP tracing . . . . . . . . . . 929
16.8.2 Device registration . . . . . . . . . . 930
16.9 Replication Manager problem determination . . . . . . . . . . 930
16.9.1 Diagnosing an indications problem . . . . . . . . . . 931
16.9.2 Restarting the replication environment . . . . . . . . . . 931
16.10 Enabling trace logging . . . . . . . . . . 931
16.10.1 Enabling WebSphere Application Server trace . . . . . . . . . . 932
16.11 ESS user authentication problem . . . . . . . . . . 940
16.12 SVC Data collection task failure . . . . . . . . . . 940

Chapter 17. Database management and reporting . . . . . . . . . . 943
17.1 DB2 database overview . . . . . . . . . . 944
17.2 Database purging in TotalStorage Productivity Center . . . . . . . . . . 944
17.2.1 Performance manager database panel . . . . . . . . . . 945
17.3 IBM DB2 tool suite . . . . . . . . . . 948
17.3.1 Command Line Tools . . . . . . . . . . 948
17.3.2 Development Tools . . . . . . . . . . 950
17.3.3 General Administration Tools . . . . . . . . . . 950
17.3.4 Monitoring Tools . . . . . . . . . . 951
17.4 DB2 Command Center overview . . . . . . . . . . 952
17.4.1 Command Center navigation example . . . . . . . . . . 952
17.5 DB2 Command Center custom report example . . . . . . . . . . 956
17.5.1 Extracting LUN data report . . . . . . . . . . 956
17.5.2 Command Center report . . . . . . . . . . 959
17.6 Exporting collected performance data to a file . . . . . . . . . . 976
17.6.1 Control Center . . . . . . . . . . 976
17.6.2 Data extraction tools, tips and reporting methods . . . . . . . . . . 979
17.7 Database backup and recovery overview . . . . . . . . . . 984
17.8 Backup example . . . . . . . . . . 988

Appendix A. Worksheets . . . . . . . . . . 991
User IDs and passwords . . . . . . . . . . 992
Server information . . . . . . . . . . 992
User IDs and passwords for key files and installation . . . . . . . . . . 993
Storage device information . . . . . . . . . . 994
IBM TotalStorage Enterprise Storage Server . . . . . . . . . . 994
IBM FAStT . . . . . . . . . . 995
IBM SAN Volume Controller . . . . . . . . . . 996

Related publications . . . . . . . . . . 997
IBM Redbooks . . . . . . . . . . 997
Other publications . . . . . . . . . . 997
Online resources . . . . . . . . . . 997
How to get IBM Redbooks . . . . . . . . . . 998
Help from IBM . . . . . . . . . . 998

Index . . . . . . . . . . 999
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

© Copyright IBM Corp. 2005. All rights reserved.
  • 16. Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX® iSeries™ Sequent® Cloudscape™ MVS™ ThinkPad® DB2® Netfinity® Tivoli Enterprise™ DB2 Universal Database™ NetView® Tivoli Enterprise Console® e-business on demand™ OS/390® Tivoli® Enterprise Storage Server® Predictive Failure Analysis® TotalStorage® Eserver® pSeries® WebSphere® Eserver® QMF™ xSeries® FlashCopy® Redbooks™ z/OS® IBM® Redbooks (logo) ™ zSeries® ibm.com® S/390® 1-2-3® The following terms are trademarks of other companies: Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others. xiv IBM TotalStorage Productivity Center V2.3: Getting Started
  • 17. Preface IBM® TotalStorage® Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data. This IBM Redbook is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center V2.3. It provides an overview of the product components and functions. We describe the hardware and software environment required, provide a step-by-step installation procedure, and offer customization and usage hints and tips. This book is not a replacement for the existing IBM Redbooks™, or product manuals, that detail the implementation and configuration of the individual products that make up the IBM TotalStorage Productivity Center, or the products as they may have been called in previous versions. We refer to those books as appropriate throughout this book. The team that wrote this redbook This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), San Jose Center. Mary Lovelace is a Consulting IT Specialist at the ITSO in San Jose, California. She has more than 20 years of experience with IBM in large systems, storage and Storage Networking product education, system engineering and consultancy, and systems support. Larry Mc Gimsey is a consulting IT Architect working in Managed Storage Services delivery supporting worldwide SAN storage customers. He has over 30 years experience in IT. He joined IBM 6 years ago as a result of an outsourcing engagement. 
Most of his experience prior to joining IBM was in mainframe systems support. It included system programming, performance management, capacity planning, system automation and storage management. Since joining IBM, Larry has been working with large SAN environments. He currently works with Managed Storage Services offering and delivery teams to define the architecture used to deliver worldwide storage services. Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central and Eastern European Region in architecting, deploying, and supporting SAN/storage/DR solutions. His areas of expertise include SAN, storage, HA systems, xSeries® servers, network operating systems (Linux, MS Windows, OS/2®), and Lotus® Domino™ servers. He holds several certifications from various vendors (IBM, Red Hat, Microsoft). Ivo has contributed to various other redbooks on Tivoli products, SAN, Linux/390, xSeries, and Linux. Mary Anne Marquez is the team lead for tape performance at IBM Tucson. She has extensive knowledge in setting up a TotalStorage Productivity Center environment for use with Copy Services and Performance Management, as well as debugging the various components of TotalStorage Productivity Center including WebSphere, ICAT, and the CCW interface for ESS. In addition to TPC, Mary Anne has experience with the native Copy Services tools on ESS model-800 and DS8000. She has authored several performance white papers. © Copyright IBM Corp. 2005. All rights reserved. xv
  • 18. Thanks to the following people for their contributions to this project: Sangam Racherla Yvonne Lyon ITSO, San Jose Center Bob Haimowitz ITSO, Raleigh Center Diana Duan Tina Dunton Nancy Hobbs Paul Lee Thiha Than Miki Walter IBM San Jose Martine Wedlake IBM Beaverton Ryan Darris IBM Tucson Doug Dunham Tivoli Storage SWAT Team Mike Griese Technical Support Marketing Lead, Rochester Curtis Neal Scott Venuti Open System Demo Center, San Jose xvi IBM TotalStorage Productivity Center V2.3: Getting Started
  • 19. Become a published author Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html Comments welcome Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at: ibm.com/redbooks Send your comments in an email to: redbook@us.ibm.com Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099 Preface xvii
  • 21. Part 1 IBM TotalStorage Productivity Center foundation In this part of the book we introduce the IBM TotalStorage Productivity Center: Chapter 1, “IBM TotalStorage Productivity Center overview” on page 3, contains an overview of the components of IBM TotalStorage Productivity Center. Chapter 2, “Key concepts” on page 27, provides information about the communications, protocols, and standards organizations that are the foundation for understanding the IBM TotalStorage Productivity Center. © Copyright IBM Corp. 2005. All rights reserved. 1
  • 23. Chapter 1. IBM TotalStorage Productivity Center overview IBM TotalStorage Productivity Center is software, part of the IBM TotalStorage Open Software Family, designed to provide a single point of control for managing both IBM and non-IBM networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT), IBM TotalStorage DS4000, IBM TotalStorage DS6000, and IBM TotalStorage DS8000 series. TotalStorage Productivity Center is a solution for customers with storage management requirements, who want to reduce the complexities and costs of storage management, including management of SAN-based storage, while consolidating control within a consistent graphical user interface. This chapter provides an overview of the entire IBM TotalStorage Open Software Family. © Copyright IBM Corp. 2005. All rights reserved. 3
  • 24. 1.1 Introduction to IBM TotalStorage Productivity Center The IBM TotalStorage Productivity Center consists of software components which enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. The TotalStorage Productivity Center is based on the recent standard issued by the Storage Networking Industry Association (SNIA). The standard addresses the interoperability of storage hardware and software within a SAN. 1.1.1 Standards organizations and standards Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible. Figure 1-1 SAN management standards bodies Key standards for Storage Management are: Distributed Management Task Force (DMTF) Common Information Model (CIM) Standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 for the CIM schema. Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S). 4 IBM TotalStorage Productivity Center V2.3: Getting Started
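The CIM model that SMI-S builds on describes every managed resource as an instance of a schema class (such as CIM_StorageVolume) with named properties, which a management application enumerates and inspects. The following is an illustrative sketch of that object model only, not a real WBEM client: the class and property names (CreationClassName, DeviceID, BlockSize, NumberOfBlocks) follow the CIM schema, but the in-memory repository and its values are invented for illustration.

```python
# Sketch of the CIM class/property model that SMI-S clients work with.
# A real client would issue an EnumerateInstances request to a CIM agent
# over WBEM; here a plain list of dicts stands in for the agent's data.

def enumerate_instances(repository, class_name):
    """Return all instances of the given CIM class from the local model."""
    return [inst for inst in repository
            if inst["CreationClassName"] == class_name]

repository = [
    {"CreationClassName": "CIM_StorageVolume", "DeviceID": "vol0",
     "BlockSize": 512, "NumberOfBlocks": 2097152},
    {"CreationClassName": "CIM_StorageVolume", "DeviceID": "vol1",
     "BlockSize": 512, "NumberOfBlocks": 4194304},
    {"CreationClassName": "CIM_ComputerSystem", "DeviceID": "ess-800"},
]

volumes = enumerate_instances(repository, "CIM_StorageVolume")
for vol in volumes:
    capacity_gib = vol["BlockSize"] * vol["NumberOfBlocks"] / 2**30
    print(f'{vol["DeviceID"]}: {capacity_gib:.1f} GiB')
```

Because every SMI-S provider exposes the same classes and properties, a management application written against this model can report on storage from any vendor that implements the specification.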
  • 25. 1.2 IBM TotalStorage Open Software Family The IBM TotalStorage Open Software Family is designed to provide a full range of capabilities, including storage infrastructure management, Hierarchical Storage Management (HSM), archive management, and recovery management. The On Demand storage environment is shown in Figure 1-2. The hardware infrastructure is a complete range of IBM storage hardware and devices providing flexibility in choice of service quality and cost structure. On top of the hardware infrastructure is the virtualization layer. The storage virtualization is infrastructure software designed to pool storage assets, enabling optimized use of storage assets across the enterprise and the ability to modify the storage infrastructure with minimal or no disruption to application services. The next layer is composed of storage infrastructure management, to help enterprises understand and proactively manage their storage infrastructure in the on demand world; hierarchical storage management, to help control growth; archive management, to manage the cost of storing huge quantities of data; and recovery management, to ensure recoverability of data. The top layer is storage orchestration, which automates workflows to help eliminate human error. Figure 1-2 Enabling customer to move toward On Demand Chapter 1. IBM TotalStorage Productivity Center overview 5
  • 26. Previously we discussed the next steps or entry points into an On Demand environment. The IBM software products which represent these entry points and which comprise the IBM TotalStorage Open Software Family are shown in Figure 1-3. Figure 1-3 IBM TotalStorage Open Software Family 1.3 IBM TotalStorage Productivity Center The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to On Demand storage needs. The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help simplify the management of complex storage network environments. The IBM TotalStorage Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli Storage Resource Manager), and TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager). 6 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 27. Taking a closer look at storage infrastructure management (see Figure 1-4), we focus on four subject matter experts who empower the storage administrators to do their work effectively: Data subject matter expert SAN Fabric subject matter expert Disk subject matter expert Replication subject matter expert Figure 1-4 Centralized, automated storage infrastructure management 1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data The Data subject matter expert has intimate knowledge of how storage is used, for example, whether the data is used by a file system or a database application. Figure 1-5 on page 8 shows the role of the Data subject matter expert, which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager). Chapter 1. IBM TotalStorage Productivity Center overview 7
  • 28. Figure 1-5 Monitor and Configure the Storage Infrastructure Data area Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time, as well as actual hardware resources. IT managers need ways to make their administrators more efficient and more efficiently utilize their storage resources. Tivoli Storage Resource Manager gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. TotalStorage Productivity Center for Data allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control. Features of the TotalStorage Productivity Center for Data are: Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used. File-system and file-level evaluation uncovers categories of files that, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up and managed. Automated control through policies that are customizable with actions that can include centralized alerting, distributed responsibility and fully automated response. Predict future growth and future at-risk conditions with historical information. Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently, and more accurately predict future storage growth. 8 IBM TotalStorage Productivity Center V2.3: Getting Started
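The file-level evaluation described above, classifying data and reporting how much space each class consumes, can be sketched as follows. This is a hedged illustration of the idea, not product code: the classification rule (file extension), the archive-candidate policy, and the file list are all assumptions made for the example.

```python
# Sketch: classify files by extension and total the space each class
# consumes, then estimate how much could be reclaimed by archiving.
# The policy set and file sizes below are invented for illustration.
from collections import defaultdict

ARCHIVE_CANDIDATES = {"log", "tmp", "bak"}   # assumed site policy

def space_by_class(files):
    """files: iterable of (path, size_in_bytes) -> {class: total_bytes}."""
    totals = defaultdict(int)
    for path, size in files:
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else "none"
        totals[ext] += size
    return dict(totals)

files = [("/db/prod.dbf", 40_000_000),
         ("/var/app.log", 25_000_000),
         ("/var/old.bak", 10_000_000)]
report = space_by_class(files)
reclaimable = sum(size for ext, size in report.items()
                  if ext in ARCHIVE_CANDIDATES)
```

A report like this is the basis for the automated policy actions mentioned above: once a class crosses a size threshold, an alert or an archiving action can be triggered.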
  • 29. TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at: Storage from a host perspective: Manage all the host-attached storage, capacity, and consumption attributed to file systems, users, directories, and files Storage from an application perspective: Monitor and manage the storage activity inside different database entities, including instance, tablespace, and table Storage utilization: Provide chargeback information Architecture The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts. With TotalStorage Productivity Center for Data, you can: Monitor virtually any host Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886. 1.3.2 Fabric subject matter expert: Productivity Center for Fabric The storage infrastructure management for Fabric covers the Storage Area Network (SAN). To handle and manage SAN events you need a comprehensive tool. The tool must have a single point of operation and must be able to perform all management tasks for the SAN. This role is filled by the TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is a part of the IBM TotalStorage Productivity Center. The Fabric subject matter expert is the expert in the SAN. 
Its role is: Discovery of fabric information Provide the ability to specify fabric policies – What HBAs to use for each host and for what purpose – Objectives for zone configuration (for example, shielding host HBAs from one another and performance) Automatically modify the zone configuration TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs, including heterogeneous switch support, and is a central point of control for SAN configuration (including zoning). It automates the management of heterogeneous storage area networks, resulting in: Improved Application Availability – Predicting storage network failures before they happen, enabling preventative maintenance – Accelerated problem isolation when failures do happen
  • 30. Optimized Storage Resource Utilization by reporting on storage network performance Enhanced Storage Personnel Productivity – TotalStorage Productivity Center for Fabric creates a single point of control, administration, and security for the management of heterogeneous storage networks Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert. Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage, and servers in a Storage Area Network. TotalStorage Productivity Center for Fabric can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric: Manages fabric devices (switches) through outband management. Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host). Monitors the network and collects events and traps. Launches vendor-provided SAN element management applications from the TotalStorage Productivity Center for Fabric Console. Discovers and manages iSCSI devices. Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor). TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management. 10 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 31. TotalStorage Productivity Center for Fabric components The major components of the TotalStorage Productivity Center for Fabric include: A manager or server, running on a SAN managing server Agents, running on one or more managed hosts Management console, which is by default on the Manager system, plus optional additional remote consoles Outband agents - consisting of vendor-supplied MIBs for SNMP There are two additional components which are not included in the TotalStorage Productivity Center: IBM Tivoli Enterprise™ Console (TEC), which is used to receive events generated by TotalStorage Productivity Center for Fabric. Once forwarded to TEC, these can then be consolidated with events from other applications and acted on according to enterprise policy. IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by the TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for analysis, giving management the ability to access and analyze information about its business. The TotalStorage Productivity Center for Fabric functions are distributed across the Manager and the Agent. 
TotalStorage Productivity Center for Fabric Server Performs initial discovery of environment: – Gathers and correlates data from agents on managed hosts – Gathers data from SNMP (outband) agents – Graphically displays SAN topology and attributes Provides customized monitoring and reporting through NetView® Reacts to operational events by changing its display (Optionally) forwards events to Tivoli Enterprise Console® or SNMP managers TotalStorage Productivity Center for Fabric Agent Gathers information about: SANs, by querying switches and devices for attribute and topology information Host-level storage, such as file systems and LUNs Event and other information detected by HBAs Forwards topology and event information to the Manager Discover SAN components and devices TotalStorage Productivity Center for Fabric uses two methods to discover information about the SAN - outband discovery, and inband discovery. Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs which support SNMP. Chapter 1. IBM TotalStorage Productivity Center overview 11
  • 32. In outband discovery, all communications occur over the IP network: TotalStorage Productivity Center for Fabric requests information over the IP network from a switch using SNMP queries on the device. The device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network. Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used: TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host. That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network. The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network. TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network. TotalStorage Productivity Center for Fabric benefits TotalStorage Productivity Center for Fabric discovers the SAN infrastructure, and monitors the status of all the discovered components. Through Tivoli NetView, the administrator can provide reports on faults on components (either individually or in groups, or “smartsets”, of components). This helps increase data availability for applications, so the company can either be more efficient or maximize the opportunity to produce revenue. 
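The value of running both discovery paths is that each sees a different part of the environment: outband SNMP queries see the switches over IP, while inband agents see hosts, HBAs, and fabric end points over Fibre Channel, and the manager correlates the two views. The sketch below illustrates that correlation step only; the device names, attributes, and merge policy are assumptions made for illustration, not the product's internal data model.

```python
# Sketch: correlate an outband (SNMP) view and an inband (agent) view of
# the SAN into one topology map. If one path is unavailable, the other
# still contributes its portion of the picture.

def merge_topology(outband, inband):
    """Each argument maps device name -> attribute dict. Agent-reported
    attributes win on conflict, since the agent sits on the fabric."""
    merged = {name: dict(attrs) for name, attrs in outband.items()}
    for name, attrs in inband.items():
        merged.setdefault(name, {}).update(attrs)
    return merged

outband = {"switch1": {"ports": 16, "source": "snmp"}}
inband = {"host1": {"hbas": 2, "source": "agent"},
          "switch1": {"zoning": "active", "source": "agent"}}

topology = merge_topology(outband, inband)
```

Here `switch1` appears in both views, so its merged entry carries the SNMP port count plus the agent's zoning detail, while `host1` is known only through the inband agent.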
TotalStorage Productivity Center for Fabric helps the storage administrator: Prevent faults in the SAN infrastructure through reporting and proactive maintenance Identify and resolve problems in the storage infrastructure quickly when a problem occurs Provide fault isolation of SAN links Supported devices for TotalStorage Productivity Center for Fabric For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848. 1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk The Disk subject matter expert’s job is to manage the disk systems. It will discover and classify all disk systems that exist and draw a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor, configure, and create disks and to do LUN masking of disks. It also does performance trending and performance threshold I/O analysis for both real disks and virtual disks. It also does automated status and problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component). The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 13. The disk system monitoring and configuration needs must be covered by a comprehensive management tool like the TotalStorage Productivity Center for Disk. 12 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 33. Figure 1-7 Monitor and configure the Storage Infrastructure Disk area The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices. Functions TotalStorage Productivity Center for Disk provides the following functions: Collect data from devices The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2® database tables. Configure performance thresholds You can use the Productivity Center for Disk to set performance thresholds for each device type. 
Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. Chapter 1. IBM TotalStorage Productivity Center overview 13
  • 34. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device. Monitor performance metrics across storage subsystems from a single console Receive timely alerts to enable event action based on customer policies View performance data from the Productivity Center for Disk database You can view performance data from the Productivity Center for Disk database in both graphical and tabular forms. The Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name, and once defined, a gauge can be “started”, which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. Once started, a gauge is displayed in its own window, and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed. 
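The threshold mechanism described above amounts to comparing each collected performance sample against a configured limit and raising an alert on every threshold-exceeded condition. The sketch below is a minimal illustration of that idea; the metric names, threshold values, and sample data are assumptions for the example, not the product's actual settings or schema.

```python
# Sketch: evaluate performance samples against configured thresholds and
# produce alert records for every threshold-exceeded condition. In the
# product, the action taken per alert (log it or trigger an event) is
# itself configurable per device.

def check_thresholds(samples, thresholds):
    """samples: [(component, metric, value)]; thresholds: {metric: limit}.
    Returns one alert record per threshold-exceeded condition."""
    alerts = []
    for component, metric, value in samples:
        limit = thresholds.get(metric)
        if limit is not None and value > limit:
            alerts.append({"component": component, "metric": metric,
                           "value": value, "limit": limit})
    return alerts

thresholds = {"io_rate_ops": 5000, "response_time_ms": 20}  # assumed limits
samples = [("ess-vol3", "io_rate_ops", 6200),
           ("ess-vol7", "response_time_ms", 12)]

alerts = check_thresholds(samples, thresholds)
```

In this run only `ess-vol3` exceeds its limit, so a single alert is produced; the second sample is within its threshold and generates nothing.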
Focus on storage optimization through identification of best LUN The Volume Performance Advisor is an automated tool to help the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables which are user controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added, for example, to the ESS, the Performance Manager can seamlessly select the best possible LUN. For detailed information about how to use the functions of the TotalStorage Productivity Center for Disk refer to Chapter 11, “Using TotalStorage Productivity Center for Disk” on page 375. 1.3.4 Replication subject matter expert: Productivity Center for Replication The Replication subject matter expert’s job is to provide a single point of control for all replication activities. This role is filled by the TotalStorage Productivity Center for Replication. Given a set of source volumes to be replicated, the Productivity Center for Replication will find the appropriate targets, perform all the configuration actions required, and ensure the source and target volume relationships are set up. Given a set of source volumes that represent an application, the Productivity Center for Replication will group these in a consistency group, give that consistency group a name, and allow you to start replication on the application. 14 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 35. Productivity Center for Replication will start up all replication pairs and monitor them to completion. If any of the replication pairs fail, meaning the application is out of sync, the Productivity Center for Replication will suspend them until the problem is resolved, then resync them and resume the replication. The Productivity Center for Replication provides complete management of the replication process. The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like the TotalStorage Productivity Center for Replication. Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area Functions Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: Continuous Copy (also known as Peer-to-Peer Remote Copy, PPRC) and Point-in-Time Copy (also known as FlashCopy®). At this time TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit, and that Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments. Chapter 1. IBM TotalStorage Productivity Center overview 15
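The monitor-to-completion behavior described above, where all pairs in a session advance together and a failed pair suspends the whole session until it is resynchronized, can be sketched as a small state machine. This is a hedged illustration of the session concept only; the state names, transitions, and pair labels are invented for the example and do not reflect the product's internal implementation.

```python
# Sketch: a replication session treats its volume pairs as one consistent
# unit. A failure in any pair suspends the session (the application is out
# of sync); after resync, the session resumes and runs to completion.

class ReplicationSession:
    def __init__(self, pairs):
        self.pairs = {pair: "copying" for pair in pairs}
        self.state = "running"

    def report(self, pair, ok):
        """Record the outcome of one pair's copy operation."""
        self.pairs[pair] = "in_sync" if ok else "failed"
        if not ok:
            self.state = "suspended"        # whole session halts together
        elif self.state == "running" and all(
                s == "in_sync" for s in self.pairs.values()):
            self.state = "complete"

    def resync(self, pair):
        """Resume the session once the failed pair has been repaired."""
        self.pairs[pair] = "copying"
        self.state = "running"

session = ReplicationSession(["vol1->vol1'", "vol2->vol2'"])
session.report("vol1->vol1'", ok=True)
session.report("vol2->vol2'", ok=False)   # failure suspends the session
session.resync("vol2->vol2'")
session.report("vol2->vol2'", ok=True)    # all pairs in sync: complete
```

Suspending the whole session on a single pair failure is what keeps the target volumes mutually consistent: no pair is allowed to run ahead while another is broken.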
Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are:

• Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently.
• Set up a group for replication.
• Create, save, and name a replication task.
• Schedule a replication session with the user interface:
  – Create Session Wizard.
  – Select Source Group.
  – Select Copy Type.
  – Select Target Pool.
  – Save Session.
• Start a replication session.

A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions, refer to Chapter 15, “Using TotalStorage Productivity Center for Replication” on page 827.

1.4 IBM TotalStorage Productivity Center

All the subject matter experts, for Data, Fabric, Disk, and Replication, are components of the IBM TotalStorage Productivity Center. The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the IBM TotalStorage Open Software Family.

The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure using existing storage management products — Productivity Center for Data, Productivity Center for Fabric, Productivity Center for Disk, and Productivity Center for Replication — from one physical place.
The IBM TotalStorage Productivity Center components can be launched from the IBM TotalStorage Productivity Center launch pad as shown in Figure 1-9 on page 17.
Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

The IBM TotalStorage Productivity Center establishes the foundation for IBM’s e-business On Demand technology. In an On Demand environment, we need functions that provide IT resources on demand, when the resources are needed by an application to support the customer’s business process. Of course, we are able to provide resources or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality. In the future, more automation will be required to handle the huge amount of work in the provisioning area, more automation like the IBM TotalStorage Productivity Center launch pad provides.

Automation means workflow. Workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today IBM is using the IBM Tivoli Intelligent Orchestrator and IBM Tivoli Provisioning Manager to satisfy the resource requests in the e-business on demand™ environment in the server arena. The IBM Tivoli Intelligent Orchestrator and the IBM Tivoli Provisioning Manager provide the provisioning in the e-business On Demand environment.

1.4.1 Productivity Center for Disk and Productivity Center for Replication

The Productivity Center for Disk and Productivity Center for Replication are software designed to enable administrators to manage SANs and storage from a single console (Figure 1-10 on page 18). This software solution is designed specifically for managing networked storage components based on SMI-S, including:

• IBM TotalStorage SAN Volume Controller
• IBM TotalStorage Enterprise Storage Server (ESS)
• IBM TotalStorage Fibre Array Storage Technology (FAStT)
• IBM TotalStorage DS4000 series
• SMI-S enabled devices
Figure 1-10 Managing multiple devices

Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and SAN Management products from other vendors.

In a SAN environment, multiple devices work together to create a storage solution. The Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 Family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided. As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage.

Figure 1-11 on page 19 provides an overview of Productivity Center for Disk and Productivity Center for Replication.
Figure 1-11 Productivity Center overview (Performance Manager, Replication Manager, and Device Manager layered on IBM Director, WebSphere Application Server, and DB2)

The Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:

• Device Manager - a common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
• Performance Manager - provided by Productivity Center for Disk
• Replication Manager - provided by Productivity Center for Replication

Device Manager

The Device Manager is responsible for the discovery of supported devices; for collecting asset, configuration, and availability data from the supported devices; and for providing a limited topology view of the storage usage relationships between those devices. The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification standards. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console, as shown in Figure 1-12 on page 20.
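SLP discovery of this kind returns service URLs; SMI-S CIM Object Managers register under the service type service:wbem. The following sketch extracts the CIM Agent endpoints from such URLs (the sample addresses are made up).

```python
def parse_wbem_urls(service_urls):
    """Filter SLP service URLs for WBEM (CIM) services and extract
    the scheme, host, and port of each advertised CIM Agent."""
    agents = []
    for url in service_urls:
        if not url.startswith("service:wbem:"):
            continue                      # skip non-CIM services
        endpoint = url[len("service:wbem:"):]   # e.g. https://9.42.0.1:5989
        scheme, rest = endpoint.split("://", 1)
        host, port = rest.rsplit(":", 1)
        agents.append((scheme, host, int(port)))
    return agents
```

A management application would then contact each endpoint over CIM-XML to enumerate the storage devices the CIMOM manages.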
Figure 1-12 IBM Director Console

Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts with IBM Tivoli SAN Manager and provides some SAN management functionality when it is installed.

The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.

SAN Management

When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The set of SAN Manager functions that are exploited is:

• The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices
• The ability to retrieve and to modify the zoning configuration on the SAN
• The ability to register for event notification, to ensure that Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when hosts' LUN configurations change
Performance Manager function

The Performance Manager function provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices.

In conjunction with data collection, the Performance Manager is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The Performance Manager enables you to perform sophisticated performance analysis for the supported storage devices.

Functions

Collect data from devices

The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, IBM TotalStorage DS6000 and DS8000 series, and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.

Configure performance thresholds

You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs.
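A collection task's start time, stop time, and sampling frequency together determine when samples are taken; a minimal sketch of enumerating those sample points:

```python
from datetime import datetime, timedelta

def sample_times(start, stop, frequency_minutes):
    """Enumerate the timestamps at which a performance collection
    task would gather samples between its start and stop times."""
    times = []
    step = timedelta(minutes=frequency_minutes)
    t = start
    while t < stop:
        times.append(t)
        t += step
    return times
```

For example, a one-hour task sampled every 15 minutes yields four sample points.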
The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device. The eligible metrics for threshold checking are fixed for each storage device. If the threshold metrics are modified by the user, the modifications are accepted immediately and applied to the checking being performed by active performance collection tasks. Examples of threshold metrics include:

• Disk utilization value
• Average cache hold time
• Percent of sequential I/Os
• I/O rate
• NVS full value
• Virtual disk I/O rate
• Managed disk I/O rate

There is a user interface that supports threshold settings, enabling a user to:

• Modify a threshold property for a set of devices of like type.
• Modify a threshold property for a single device.
• Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type. IBM-recommended critical and warning values will be provided for all thresholds known to indicate potential performance problems for IBM storage devices.
• Reset a threshold property to the IBM-recommended value (if defined) for a single device.
• Show a summary of threshold properties for all of the devices of like type.
• View performance data from the Performance Manager database.

Gauges

The Performance Manager supports a performance-type gauge. The performance-type gauge presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task. The maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data.

The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges; data that has been purged from the database cannot be viewed.

You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range.
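The threshold checking described earlier pairs each metric with warning and critical values; a minimal sketch of the comparison (the metric name and return shape are illustrative):

```python
def check_threshold(metric, value, warning, critical):
    """Compare one sampled metric against its warning and critical
    threshold values; return the event to raise, or None if the
    sample is within normal bounds."""
    if value >= critical:
        return (metric, "critical", value)
    if value >= warning:
        return (metric, "warning", value)
    return None
```

In the product, exceeding a threshold generates an alert or a logged occurrence, as described in the configuration section above.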
The date/time range can be changed after the initial gauge window is displayed. For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed.

Database services for managing the collected performance data

The performance data collected from the supported devices is stored in a DB2 database. Database services are provided that enable you to manage the potentially large volumes of data.

Database purge function

A database purge function deletes older performance data samples and, optionally, the associated exception data. Flexibility is built into the purge function: it enables you to specify the data to purge, allowing important data to be maintained for trend purposes.

• You can specify to purge all of the sample data from all types of devices older than a specified number of days.
• You can specify to purge the data associated with a particular type of device.
• If threshold checking was enabled at the time of data collection, you can exclude data that exceeded at least one threshold value from being purged.
• You can specify the number of days that data is to remain in the database before being purged. Sample data and, optionally, exception data older than the specified number of days will be purged.

A reorganization function is performed on the database tables after the sample data is deleted from the respective database tables.
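The purge options above amount to an age cutoff plus an optional exemption for samples that exceeded a threshold; a sketch of that selection (the sample record layout is hypothetical):

```python
from datetime import datetime, timedelta

def select_for_purge(samples, keep_days, now, keep_exceptions=True):
    """Pick the samples eligible for deletion: older than keep_days,
    optionally sparing samples that exceeded a threshold so exception
    data is retained for trend analysis."""
    cutoff = now - timedelta(days=keep_days)
    return [s for s in samples
            if s["time"] < cutoff
            and not (keep_exceptions and s["exceeded_threshold"])]
```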
Database information function

Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the percentage of the database that is full. This function can be invoked from either the Web user interface or the CLI.

Volume Performance Advisor

The advanced performance analysis provided by Productivity Center for Disk is intended to address the challenge of allocating more storage in a storage system so that the users of the newly allocated storage achieve the best possible performance. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated (that is, the best placement from a performance perspective). It also uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several user-controlled variables, such as the required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function so that, for example, when a new LUN is added to the ESS, the Device Manager can seamlessly select the best possible LUN.

Replication Manager function

Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy).
Currently, replication functions are provided for the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensure that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication is controlled by applying predefined policies to Groups and Pools, which are groupings of LUNs that are managed by the Replication Manager. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.

1.4.2 Event services

At the heart of any systems management solution is the ability to alert the system administrator in the event of a system problem. IBM Director provides a method of alerting called Event Action Plans, which enables the definition of event triggers independently from the actions that might be taken.
An event is an occurrence of a predefined condition relating to a specific managed object that identifies a change in a system process or a device. The notification of that change can be generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director event management encompasses the following concepts:

• Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered.
• Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions.
• Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on.
• Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems.

The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly which action plans are invoked for each specific event.

Productivity Center provides extensions to the IBM Director event management support.
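A filter of the kind built with the Event Action Plan Builder can be sketched as a predicate over an event's attributes; the severity names and event fields here are illustrative, not IBM Director's exact definitions.

```python
def filter_matches(event, event_types=None, min_severity=None):
    """Decide whether an incoming event passes a filter built from
    event-type and minimum-severity criteria; an omitted criterion
    matches everything."""
    levels = {"harmless": 0, "warning": 1, "critical": 2, "fatal": 3}
    if event_types is not None and event["type"] not in event_types:
        return False
    if min_severity is not None and levels[event["severity"]] < levels[min_severity]:
        return False
    return True
```

An action plan would run its associated actions only for events for which such a filter returns True.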
Productivity Center takes full advantage of the IBM Director built-in support for event logging and viewing:

• It generates events that will be externalized. Action plans can be created based on filter criteria for these events. The default action plan is to log all events in the event log.
• It creates additional event families, and event types within those families, that will be listed in the Event Action Plan Builder.
• Event actions that enable Productivity Center functions to be exploited from within action plans will be provided. An example is the action to indicate the amount of historical data to be kept.

1.5 Taking steps toward an On Demand environment

So what is an On Demand operating environment? It is not a specific set of hardware and software. Rather, it is an environment that supports the needs of the business, allowing it to become and remain responsive, variable, focused, and resilient. An On Demand operating environment unlocks the value within the IT infrastructure to be applied to solving business problems. It is an integrated platform, based on open standards, to enable rapid deployment and integration of business applications and processes. Combined with an environment that allows true virtualization and automation of the infrastructure, it enables delivery of IT capability On Demand.
An On Demand operating environment must be:

• Flexible
• Self-managing
• Scalable
• Economical
• Resilient
• Based on open standards

The move to an On Demand storage environment is an evolving one; it does not happen all at once. There are several next steps that you can take to move to the On Demand environment:

• Constant changes to the storage infrastructure (upgrading or changing hardware, for example) can be addressed by virtualization, which provides flexibility by hiding the hardware and software from users and applications.
• Empower administrators with automated tools for managing heterogeneous storage infrastructures and eliminate human error.
• Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
• Manage the cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
• Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
• The ultimate goal is to eliminate human errors by preparing for Infrastructure Orchestration software that can be used to automate workflows.

No matter which steps you take toward an On Demand environment, there will be results: improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.
Chapter 2. Key concepts

There are certain industry standards and protocols that are the basis of the IBM TotalStorage Productivity Center. An understanding of these concepts is important for installing and customizing the IBM TotalStorage Productivity Center. In this chapter, we describe the standards on which the IBM TotalStorage Productivity Center is built, as well as the methods of communication used to discover and manage storage devices. We also discuss communication between the various components of the IBM TotalStorage Productivity Center. To help you understand these concepts, we provide diagrams to show the relationship and interaction of the various elements in the IBM TotalStorage Productivity Center environment.
2.1 IBM TotalStorage Productivity Center architecture

This chapter provides an overview of the components and functions that are included in the IBM TotalStorage Productivity Center.

2.1.1 Architectural overview diagram

The architectural overview diagram in Figure 2-1 helps to illustrate the governing ideas and building blocks of the product suite that makes up the IBM TotalStorage Productivity Center. It provides a logical overview of the main conceptual elements and relationships in the architecture: components, connections, users, and external systems.

Figure 2-1 IBM TotalStorage Productivity Center architecture overview diagram

IBM TotalStorage Productivity Center and Tivoli Provisioning Manager are presented as building blocks in the diagram. Neither product is a single application; each is a complex environment by itself. The diagram also shows the different methods used to collect information from multiple systems to give an administrator the necessary views of the environment, for example:

• Software clients (agents)
• Standard interfaces and protocols (for example, Simple Network Management Protocol (SNMP) and Common Information Model (CIM) Agents)
• Proprietary interfaces (for only a few devices)

In addition to the central data collection, Productivity Center provides a single point of control for a storage administrator, even though each manager still comes with its own interface. A program called the Launchpad is provided to start the individual applications from a central dashboard.
The Tivoli Provisioning Manager relies on Productivity Center to make provisioning possible.

2.1.2 Architectural layers

The IBM TotalStorage Productivity Center architecture can be broken up into three layers, as shown in Figure 2-2. Layer one represents a high-level overview: there is only one IBM TotalStorage Productivity Center instance in the environment. Layers two and three drill down into the TotalStorage Productivity Center environment so that you can see the managers and the prerequisite components.

Figure 2-2 Architectural layers

Layer two consists of the individual components that are part of the product suite:

• IBM TotalStorage Productivity Center for Disk
• IBM TotalStorage Productivity Center for Replication
• IBM TotalStorage Productivity Center for Fabric
• IBM TotalStorage Productivity Center for Data

Throughout this redbook, these products are referred to as managers or components.

Layer three includes all the prerequisite components, for example, IBM DB2, IBM WebSphere, IBM Director, IBM Tivoli NetView, and Tivoli Common Agent Services. IBM TotalStorage Productivity Center for Fabric can be installed on a full version of WebSphere Application Server or on the embedded WebSphere Application Server, which is shipped with Productivity Center for Fabric. Installation on a full version of WebSphere Application Server is used when other components of TotalStorage Productivity Center are installed on the same logical server. IBM TotalStorage Productivity Center for Fabric can utilize an existing IBM Tivoli NetView installation or can be installed along with it.

Note: Each of the manager and prerequisite components can be drilled down even further, but in this book we go into this detail only where necessary. The only exception is Tivoli Common Agent Services, which is a new underlying service in the Tivoli product family.

Terms and definitions

When you look at the diagram in Figure 2-2, you see that each layer has a different name.
The following sections explain each of these names as well as other terms commonly used in this book.
Product

A product is something that is available to be ordered. The individual products that are included in IBM TotalStorage Productivity Center are introduced in Chapter 1, “IBM TotalStorage Productivity Center overview” on page 3.

Components

Products (licensed software packages) and prerequisite software applications are in general called components. Some of the components are internal, meaning that, from the installation and configuration point of view, they are somewhat transparent. External components have to be installed separately. We usually use the term components for the following applications:

• IBM Director (external, used by Disk and Replication Manager)
• IBM DB2 (external, used by all managers)
• IBM WebSphere Application Server (external, used by Disk and Replication Manager; used by Fabric Manager if installed on the same logical server)
• Embedded WebSphere Application Server (internal, used by Fabric Manager)
• Tivoli NetView (internal, used by Fabric Manager)
• Tivoli Common Agent Services (external, used by Data and Fabric Manager)

Not all of the internal components are always shown in the diagrams and lists in this book. The term subcomponent is used to emphasize that a certain component (the subcomponent) belongs to or is used by another component. For example, a Resource Manager is a subcomponent of the Fabric or Data Manager.

Managers

The managers are the central components of the IBM TotalStorage Productivity Center environment. They may share some of the prerequisite components. For example, IBM DB2 and IBM WebSphere are used by different managers.
In this book, we sometimes use the following terms:

• Disk Manager for Productivity Center for Disk
• Replication Manager for Productivity Center for Replication
• Data Manager for Productivity Center for Data
• Fabric Manager for Productivity Center for Fabric

In addition, we use the term Agent Manager for the Tivoli Agent Manager component, because the name of that component already includes the term manager.

Agents

The agents are not shown in the diagram in Figure 2-2 on page 29, but they have an important role in the IBM TotalStorage Productivity Center environment. There are two types of agents: Common Information Model (CIM) Agents and agents that belong to one of the managers.

• CIM Agents: Agents that offer a CIM interface for management applications, for example, for IBM TotalStorage DS8000 and DS6000 series storage systems, IBM TotalStorage Enterprise Storage Server (ESS), SAN (Storage Area Network) Volume Controller, and DS4000 Storage Systems, formerly known as FAStT (Fibre Array Storage Technology) Storage Systems
• Agents that belong to one of the managers:
  – Data Agents: Agents that collect data for the Data Manager
  – Fabric Agents: Agents that are used by the Fabric Manager for inband SAN data discovery and collection
In addition to these agents, the Service Location Protocol (SLP) also uses the term agent for these components:
- User Agent
- Service Agent
- Directory Agent

Elements
We use the generic term element whenever we do not differentiate between components and managers.

2.1.3 Relationships between the managers and components
An IBM TotalStorage Productivity Center environment includes many elements and is complex. This section explains how all the elements work together to form a center for storage administration.

Figure 2-3 shows the communication between the elements and how they relate to each other. Each gray box in the diagram represents one machine. The dotted line within a machine separates two distinct managers of the IBM TotalStorage Productivity Center.

Figure 2-3 Manager and component relationship diagram

All of these components can also run on one machine. In this case, all managers and IBM Director share the same DB2 installation, and all managers and IBM Tivoli Agent Manager share the same WebSphere installation.

Chapter 2. Key concepts 31
2.1.4 Collecting data
Multiple methods are used within the different components to collect data from the devices in your environment. In this version of the product, the information is stored in different databases (see Table 3-6 on page 62) that are not shared between the individual components.

Productivity Center for Disk and Productivity Center for Replication
Productivity Center for Disk and Productivity Center for Replication use the Storage Management Initiative - Specification (SMI-S) standard (see “Storage Management Initiative - Specification” on page 35) to collect information about subsystems. For devices that are not CIM ready, this requires the installation of a proxy application (a CIM Agent or CIM Object Manager (CIMOM)). These managers do not use an agent of their own, as the Data Manager and Fabric Manager do.

IBM TotalStorage Productivity Center for Fabric
IBM TotalStorage Productivity Center for Fabric uses two methods to collect information: inband and outband discovery. You can use either method, or both at the same time to obtain the most complete picture of your environment. Using just one of the methods gives you incomplete information, but topology information is available in both cases.

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs that support SNMP.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the following general process:
1. The Agent sends commands through its Host Bus Adapters (HBA) and the Fibre Channel network to gather information about the switches.
2. The switch returns the information through the Fibre Channel network and the HBA to the Agent.
3. The Agent queries the endpoint devices using RNID and SCSI protocols.
4. The Agent returns the information to the Manager over the IP network.
5. The Manager then responds to the new information by updating the database and redrawing the topology map if necessary.

Internet SCSI (iSCSI) discovery
iSCSI is an Internet Protocol (IP)-based storage networking standard for linking data storage. It was developed by the Internet Engineering Task Force (IETF). iSCSI can be used to transmit data over LANs and WANs.
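The inband discovery process above can be pictured as a simple loop: agents report what they see, and the manager updates its database and redraws the map only when something changed. The following is a minimal illustrative model (our own sketch, not product code; the class and method names are invented for this example):

```python
# Illustrative model (not product code): a manager merges topology
# reports that agents gather inband, and redraws the map only when
# something actually changed, mirroring steps 4-5 above.

class Manager:
    def __init__(self):
        self.topology = {}     # switch name -> set of attached endpoints
        self.redraw_count = 0

    def receive_report(self, switch, endpoints):
        """Called when an agent returns inband discovery data over IP."""
        new = set(endpoints)
        if self.topology.get(switch) != new:
            self.topology[switch] = new   # update the database
            self.redraw_count += 1        # redraw the topology map

mgr = Manager()
mgr.receive_report("switch1", ["hostA", "lun0"])
mgr.receive_report("switch1", ["hostA", "lun0"])          # no change: no redraw
mgr.receive_report("switch1", ["hostA", "lun0", "lun1"])  # change: redraw
print(mgr.redraw_count)  # 2
```

The point of the sketch is the idempotent update: repeated identical reports from agents cause no extra work for the manager.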
The discovery paths are shown in parentheses in the diagram in Figure 2-4.

Figure 2-4 Fabric Manager inband and outband discovery paths

IBM TotalStorage Productivity Center for Data
Within the IBM TotalStorage Productivity Center, the Data Manager is used to collect information about logical drives, file systems, individual files, database usage, and more. Agents are installed on the application servers and perform a regular scan to report the information back. To report at the subsystem level, an SMI-S interface is also built in. This information is correlated with the data that is gathered from the agents to show the LUNs that a host is using (an agent must be installed on that host). In contrast to Productivity Center for Disk and Productivity Center for Replication, the SMI-S interface in Productivity Center for Data is used only to retrieve information, not to configure a device.

Restriction: The SLP User Agent integrated into the Data Manager uses SLP Directory Agents and Service Agents to find services in the local subnet. To discover CIM Agents in remote networks, they have to be registered with either a Directory Agent or a Service Agent located in the local subnet, unless routers are configured to also route multicast packets. You need to add each CIM Agent that is not discovered manually to the Data Manager; refer to “Configuring the CIM Agents” on page 290.
2.2 Standards used in IBM TotalStorage Productivity Center
This section presents an overview of the standards that are used within IBM TotalStorage Productivity Center by the different components. SLP and CIM are described in detail because they are new concepts to many people who work with IBM TotalStorage Productivity Center, and they are important to understand.

Vendor-specific tools are available to manage devices in the SAN, but these proprietary interfaces are not used within IBM TotalStorage Productivity Center. The only exception is the application programming interface (API) that Brocade has made available to manage their Fibre Channel switches. This API is used within IBM TotalStorage Productivity Center for Fabric.

2.2.1 ANSI standards
Several standards have been published for the inband management of storage devices, for example, SCSI Enclosure Services (SES).

T11 committee
Since the 1970s, the objective of the ANSI T11 committee has been to define interface standards for high-performance and mass storage applications. Since that time, the committee has completed work on three projects:
- High-Performance Parallel Interface (HIPPI)
- Intelligent Peripheral Interface (IPI)
- Single-Byte Command Code Sets Connection (SBCON)
Currently the group is working on Fibre Channel (FC) and Storage Network Management (SM) standards.

Fibre Channel Generic Services
The Fibre Channel Generic Services (FC-GS-3) Directory Service and Management Service are used within IBM TotalStorage Productivity Center for SAN management. The availability and level of function depend on the implementation by the individual vendor. IBM TotalStorage Productivity Center for Fabric uses this standard.

2.2.2 Web-Based Enterprise Management
Web-Based Enterprise Management (WBEM) is an initiative of the Distributed Management Task Force (DMTF) with the objective of enabling the management of complex IT environments.
It defines a set of management and Internet standard technologies to unify the management of complex IT environments. The three main conceptual elements of the WBEM initiative are:
- Common Information Model (CIM): A formal object-oriented modeling language that is used to describe the management aspects of systems. See also “Common Information Model” on page 47.
- xmlCIM: A grammar to describe CIM declarations and messages used by the CIM protocol.
- Hypertext Transfer Protocol (HTTP): Used to enable communication between a management application and a device that both use CIM.
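To make the xmlCIM-over-HTTP idea concrete, the sketch below builds a schematic request body for enumerating instances of a CIM class. It is deliberately simplified: the element names (CIM, MESSAGE, SIMPLEREQ, IMETHODCALL) follow the CIM-XML mapping, but a real request carries additional elements (namespace paths, parameter wrappers) that are omitted here, and the attribute values are illustrative only.

```python
# Schematic sketch of an xmlCIM request, of the kind a WBEM CIM Client
# would POST over HTTP to a CIMOM. Simplified: real CIM-XML wraps the
# class name in namespace and parameter elements not shown here.
import xml.etree.ElementTree as ET

def build_enumerate_request(class_name):
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID="1", PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    ET.SubElement(call, "CLASSNAME", NAME=class_name)
    return ET.tostring(cim, encoding="unicode")

print(build_enumerate_request("CIM_StorageVolume"))
```

The same message structure is what SMI-S selects as its single management transport (CIM-XML over HTTP), as described later in this chapter.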
The WBEM architecture defines the following elements:
- CIM Client: A management application, such as IBM TotalStorage Productivity Center, that uses CIM to manage devices. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents.
- CIM Managed Object: A hardware or software component that can be managed by a management application using CIM.
- CIM Agent: Embedded into a device, or installed on a server, using a CIM Provider to translate the device's proprietary commands into CIM calls; it interfaces with the management application (the CIM Client). A CIM Agent is linked to one device.
- CIM Provider: The element that translates CIM calls into the device-specific commands, much like a device driver. A CIM Provider is always closely linked to a CIM Object Manager or CIM Agent.
- CIM Object Manager: A CIM Object Manager (CIMOM) is the part of the CIM Server that links the CIM Client to the CIM Provider. It enables a single CIM Agent to talk to multiple devices.
- CIM Server: The software that runs the CIMOM and the CIM Providers for a set of devices. This approach is used when the devices do not have an embedded CIM Agent. The term CIM Server is often not used; people often say CIMOM when they really mean the CIM Server.

2.2.3 Storage Networking Industry Association
The Storage Networking Industry Association (SNIA) defines standards that are used within IBM TotalStorage Productivity Center. You can find more information on the Web at:
http://www.snia.org

Fibre Channel Common HBA API
The Fibre Channel Common HBA API is used as a standard for inband storage management. It acts as a bridge between a SAN management application, such as the Fabric Manager, and the Fibre Channel Generic Services. The IBM TotalStorage Productivity Center for Fabric Agent uses this standard.
Storage Management Initiative - Specification
SNIA has fully adopted and enhanced CIM for storage management in its Storage Management Initiative - Specification (SMI-S). SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks.
The idea behind SMI-S is to standardize the management interfaces so that management applications can use them to provide cross-device management. This means that a newly introduced device can be managed immediately, as long as it conforms to the standard. SMI-S extends CIM and WBEM with the following features:
- A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S.
- A complete, unified, and rigidly specified object model: SMI-S defines profiles and recipes within CIM that enable a management client to reliably use a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources, such as disk volumes, must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMI-S provides client developers with vital information for traversing CIM classes within a device or subsystem, and between devices and subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMI-S compliant products, when introduced into a SAN environment, automatically announce their presence and capabilities to other constituents using SLP (see 2.3.1, “SLP architecture” on page 38).
- Resource locking: SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources through a lock manager.

The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. The SNIA also provides interoperability tests that help vendors verify that their applications and devices conform to the standard.
Managers or components that use this standard include:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Data

2.2.4 Simple Network Management Protocol
The Simple Network Management Protocol (SNMP) is an Internet Engineering Task Force (IETF) protocol for monitoring and managing systems and devices in a network. Functions supported by the SNMP protocol are the request and retrieval of data, the setting or writing of data, and traps that signal the occurrence of events.

SNMP enables a management application to query information from a managed device. The managed device runs software that sends and receives the SNMP information. This software module is usually called the SNMP agent.
Device management
An SNMP manager can read information from an SNMP agent to monitor a device. To do this, the device needs to be polled at regular intervals. The SNMP manager can also change the configuration of a device by setting certain values for the corresponding variables. Managers or components that use this capability include the IBM TotalStorage Productivity Center for Fabric.

Traps
A device can also be set up to send a notification to the SNMP manager (this is called a trap) to asynchronously inform the SNMP manager of a status change. Depending on the existing environment and organization, it is likely that your environment already has an SNMP management application in place. The managers or components that use this standard are:
- IBM TotalStorage Productivity Center for Fabric (sends and receives traps)
- IBM TotalStorage Productivity Center for Data (can be set up to send traps, but does not receive traps)
- IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication (events can be sent as SNMP traps by utilizing the IBM Director infrastructure)

Management Information Base
SNMP uses a hierarchically structured Management Information Base (MIB) to define the meaning and the type of a particular value. An MIB defines managed objects that describe the behavior of the SNMP entity, which can be anything from an IP router to a storage subsystem. The information is organized in a tree structure.

Note: For more information about SNMP, refer to TCP/IP Tutorial and Technical Overview, GG24-3376.

IBM TotalStorage Productivity Center for Data MIB file
For users planning to use the IBM TotalStorage Productivity Center for Data SNMP trap alert notification capabilities, an SNMP MIB is included in the server installation. You can find the MIB in the file tivoli_install_directory/snmp/tivoliSRM.MIB. The MIB is provided for use by your SNMP management console software.
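The MIB's tree structure can be pictured as nested name spaces that translate a dotted object identifier (OID) into symbolic names. The sketch below walks the standard path to the MIB-II sysDescr object (1.3.6.1.2.1.1.1); the dictionary layout itself is just an illustration, not how any SNMP tool stores a MIB.

```python
# Minimal sketch of how a MIB's tree structure maps a numeric OID to a
# name. The subtree below is the standard path to MIB-II sysDescr
# (1.3.6.1.2.1.1.1); the nested-dictionary layout is illustrative only.
MIB_TREE = {
    "1": ("iso", {
        "3": ("org", {
            "6": ("dod", {
                "1": ("internet", {
                    "2": ("mgmt", {
                        "1": ("mib-2", {
                            "1": ("system", {
                                "1": ("sysDescr", {}),
                            }),
                        }),
                    }),
                }),
            }),
        }),
    }),
}

def resolve(oid):
    """Translate a dotted OID into its symbolic MIB names."""
    names, subtree = [], MIB_TREE
    for part in oid.split("."):
        name, subtree = subtree[part]
        names.append(name)
    return ".".join(names)

print(resolve("1.3.6.1.2.1.1.1"))  # iso.org.dod.internet.mgmt.mib-2.system.sysDescr
```

An MIB compiler on a management station performs essentially this translation (in both directions) for every object defined in the imported MIB file.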
Most SNMP management station products provide a program called an MIB compiler that can be used to import MIBs. This allows you to better view Productivity Center for Data generated SNMP traps from within your management console software. Refer to your management console software documentation for instructions on how to compile or import a third-party MIB.

2.2.5 Fibre Alliance MIB
The Fibre Alliance has defined an MIB for the management of storage devices and has submitted it to the IETF for standardization. The intention of putting together this MIB was to have one MIB that covers most (if not all) of the attributes of storage devices from multiple vendors. The idea was to have only one MIB loaded onto an SNMP manager, instead of one MIB file for each component. However, this requires that all devices comply with that standard MIB, which is not always the case.
Note: This MIB is not part of IBM TotalStorage Productivity Center. To learn more about the Fibre Alliance and its MIB, refer to the following Web sites:
http://www.fibrealliance.org
http://www.fibrealliance.org/fb/mib_intro.htm

2.3 Service Location Protocol (SLP) overview
SLP is an IETF standard, documented in Requests for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services. It enables the discovery and selection of generic services, which can range in function from hardware services, such as those for printers or fax machines, to software services, such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network.

Traditionally, to use a particular service, an end user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user can search for all available printers that support PostScript, based on the given service type (printers) and the given attribute (PostScript). SLP searches the user's network for any matching services and returns the discovered list to the user.

2.3.1 SLP architecture
The SLP architecture includes three major components: a Service Agent (SA), a User Agent (UA), and a Directory Agent (DA). The SA and UA are required components in an SLP environment, whereas the SLP DA is optional. The SMI-S specification introduces SLP as the method for management applications (the CIM Clients) to locate managed objects. In SLP, an SA is used to report to UAs that a service that has been registered with the SA is available.
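The services that SAs register and UAs discover are identified by service URLs, which pair a service type with the location of the service. As a small sketch (the helper function and its field names are our own, not part of any SLP API), a printer service URL like the example in the overview can be split into its parts:

```python
# Illustrative sketch: an SLP service URL pairs a service type with the
# location of the service. The parsing helper is our own invention for
# this example, not part of any SLP implementation.
def parse_service_url(url):
    """Split 'service:<type>://<address>' into its parts."""
    scheme, _, rest = url.partition(":")
    assert scheme == "service"
    service_type, _, address = rest.partition("://")
    return {"type": service_type, "address": address}

info = parse_service_url("service:printer:lpr://printsrv.example.com:515")
print(info["type"])     # printer:lpr
print(info["address"])  # printsrv.example.com:515
```

A UA searches by service type (here, printer services) and receives matching service URLs, which the client then uses to contact the service directly.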
The following sections describe each of these components.

Service Agent (SA)
The SLP SA is the component of the SLP architecture that works on behalf of one or more network services to advertise the availability of those services. The SA replies to external service requests using IP unicast, providing the requested information about the registered services, if it is available.
The SA can run in the same process as the service itself or in a different one. In either case, the SA supports registration and deregistration requests for the service (as shown in the right part of Figure 2-5). The service registers itself with the SA during startup and removes its registration during shutdown. In addition, every service registration is associated with a life-span value, which specifies the time that the registration remains active. In the left part of the diagram, you can see the interaction between a UA and the SA.

Figure 2-5 SLP SA interactions (without SLP DA)

A service is required to reregister itself periodically, before the life span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if a service becomes inactive without removing its own registration, that old registration is removed automatically when its life span expires. The maximum life span of a registration is 65535 seconds (about 18 hours).

User Agent (UA)
The SLP UA is a process working on behalf of the user to establish contact with a network service. The UA retrieves (or queries for) service information from Service Agents or Directory Agents. The UA is the component of SLP that is closely associated with a client application, or with a user who is searching for the location of one or more services in the network.

You can use the SLP UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locators (URLs) and any service attributes. You can then use a service's URL to connect to that service. The SLP UA locates the registered services based on a general description of the services that the user or client application has specified.
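The life-span rule above can be sketched as a small expiry table. This is our own model, not an SLP implementation: a registration survives only until its life span runs out, so a service that dies without deregistering disappears from the table automatically.

```python
# Sketch (our own model, not an SLP implementation) of the life-span
# rule: a registration expires unless the service reregisters before
# its life span runs out.
MAX_LIFESPAN = 65535  # seconds, the SLP maximum (about 18 hours)

class ServiceAgent:
    def __init__(self):
        self.registrations = {}  # service URL -> expiry time

    def register(self, url, now, lifespan=MAX_LIFESPAN):
        lifespan = min(lifespan, MAX_LIFESPAN)   # clamp to the SLP maximum
        self.registrations[url] = now + lifespan

    def active_services(self, now):
        """Drop expired entries and return the still-active URLs."""
        self.registrations = {u: t for u, t in self.registrations.items() if t > now}
        return sorted(self.registrations)

sa = ServiceAgent()
sa.register("service:printer:lpr://host1", now=0, lifespan=60)
sa.register("service:printer:lpr://host2", now=0, lifespan=120)
print(sa.active_services(now=90))  # only host2 is still registered
```

Reregistering before expiry simply overwrites the expiry time, which is why periodic reregistration keeps a live service visible indefinitely.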
This description usually consists of a service type and any service attributes, which are matched against the service URLs registered with the SLP Service Agents. The SLP UA usually runs in the same process as the client application, although it does not have to.

The SLP UA processes find requests by sending multicast messages to the network, targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA can, therefore, discover these SAs with a minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns the matches to the UA in a unicast reply message.
The SLP UA follows the multicast convergence algorithm and sends repeated multicast messages until no new replies are received. The resulting set of discovered services, including their service URLs and any service attributes, is returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, using each service's URL (see Figure 2-6).

Figure 2-6 SLP UA interactions without SLP DA

An SLP UA is not required to discover all matching services that exist in the network, only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets, which can be exceeded when there are many registered services, or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs can recognize truncated service replies and establish TCP connections to retrieve all of the information about the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range, which can cut short the multicast convergence mechanism. The SLP administrator can mitigate this exposure by setting up one or more SLP DAs.

Directory Agent
The SLP DA is an optional component of SLP that collects and caches network service advertisements. The DA is primarily used to simplify SLP administration and to improve SLP performance. You can consider the SLP DA an intermediate tier in the SLP architecture. It is placed between the UAs and the SAs, so that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request and reply traffic in the network. It also protects the SAs from being overwhelmed by too many service requests when there are many UAs in the environment.
Figure 2-7 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.

Figure 2-7 SLP User Agent interactions with Directory Agent and Service Agent

When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA first initializes, it performs a DA discovery using a multicast service request, specifying the special, reserved service type service:directory-agent. This process is called active DA discovery, and it is achieved through the same mechanism as any other discovery using SLP.

Similarly, in most cases, an SLP UA also performs active DA discovery using multicast when it first starts. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If the UA is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to the SAs. The DA replies with unicast service replies, providing the requested service URLs and attributes.

Figure 2-8 shows the interactions of UAs and SAs with DAs during active DA discovery.

Figure 2-8 SLP Directory Agent discovery interactions
The SLP DA functions similarly to an SLP SA: it receives registration and deregistration requests and responds to service requests with unicast service replies. However, there are a couple of areas where DAs provide more functionality than SAs. One, mentioned previously, is that DAs respond to service requests for the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA's IP address. This allows SAs and UAs to perform active discovery of DAs.

The other difference is that when a DA first initializes, it sends a multicast DA advertisement message to advertise its services to any SAs (and UAs) that may already be active in the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is sometimes called passive DA discovery. When an SA finds a new DA through passive DA discovery, it sends registration requests for all of its currently registered services to that new DA.

Figure 2-9 shows the interactions of DAs with SAs and UAs during passive DA discovery.

Figure 2-9 Service Location Protocol passive DA discovery

Why use an SLP DA?
The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. By deploying one or more DAs, UAs unicast service requests to the DAs, and SAs register with the DAs using unicast. The only SLP-related multicast in a network with DAs is for active and passive DA discovery.

SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast traffic. By eliminating multicast for normal UA requests, delays and timeouts are eliminated. DAs act as a focal point for SA and UA activity.
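The UA behavior described above reduces to a simple rule: with one or more known DAs, unicast the service request to a DA (whose cache answers for every SA that registered with it); otherwise fall back to multicasting to all SAs in range. The sketch below is our own model of that rule, with DAs and SAs represented as plain lookup tables rather than network endpoints:

```python
# Sketch (our own model, not an SLP implementation) of how a UA chooses
# between unicasting to a known DA and multicasting to SAs.
def find_services(service_type, directory_agents, service_agents):
    if directory_agents:
        # Unicast to the first known DA; its cache answers for every
        # SA that registered with it.
        da = directory_agents[0]
        return sorted(da.get(service_type, []))
    # No DA known: multicast, collecting a reply from each SA in range.
    found = []
    for sa in service_agents:
        found.extend(sa.get(service_type, []))
    return sorted(found)

sa1 = {"printer": ["service:printer:lpr://host1"]}
sa2 = {"printer": ["service:printer:lpr://host2"]}
da = {"printer": ["service:printer:lpr://host1", "service:printer:lpr://host2"]}

print(find_services("printer", [da], [sa1, sa2]))
print(find_services("printer", [], [sa1, sa2]))  # same result, more traffic
```

Both paths return the same service list; the difference is the traffic pattern, which is exactly why DAs help in large networks.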
Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.

In networks without multicast enabled, you can configure SLP to use broadcast. However, broadcast is inefficient, because it requires every host to process the message. Broadcast also does not normally propagate across routers. As a result, in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets.
When to use DAs
Use DAs in your enterprise when any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or timeouts during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.

SLP communication
SLP uses three methods to send messages across an IP network: unicast, broadcast, and multicast. Data can be sent to one single destination (unicast) or to multiple destinations that are listening at the same time (broadcast or multicast). The difference between a multicast and a broadcast is important: a broadcast addresses all stations in a network, while multicast messages are received only by those stations that have joined the corresponding multicast group.

Unicast
The most common communication method, unicast, requires that the sender of a message identify one, and only one, target of that message. The target IP address is encoded within the message packet and is used by the routers along the network path to route the packet to the proper destination. If a sender wants to send the same message to multiple recipients, then multiple messages must be generated and placed in the network, one message per recipient. When there are many potential recipients for a particular message, this places an unnecessary strain on network resources, because the same data is duplicated many times, with the target IP address encoded within the messages as the only difference.

Broadcast
In cases where the same message must be sent to many targets, broadcast is a much better choice than unicast, because it puts much less strain on the network.
Broadcasting uses a special IP address, 255.255.255.255, which indicates that the message packet is intended for all nodes in a network. As a result, the sender of a message needs to generate only a single copy of that message and can still transmit it to multiple recipients, that is, to all members of the network. The routers replicate the message packet as it is sent along all possible routes in the network to reach all possible destinations. This puts much less strain on network bandwidth, since only a single message stream enters the network, as opposed to one message stream per recipient.

However, broadcasting puts much more strain on the individual nodes (and routers) in the network, because every node receives the message, even though most nodes are likely not interested in it. Members of the network that were not the intended recipients must receive the unwanted message anyway and discard it. Due to this inefficiency, in most network configurations, routers are configured not to forward any broadcast traffic. This means that broadcast messages can reach only nodes on the same subnet as the sender.

Multicast
The ability of SLP to automatically discover services that are available in the network, without a lot of setup or configuration, depends in large part on the use of IP multicasting. IP multicasting is a broad subject in itself, and only a brief and simple overview is provided here.
Multicasting can be thought of as a more sophisticated broadcast that aims to solve some of the inefficiencies inherent in the broadcasting mechanism. With multicasting, the sender of a message again has to generate only a single copy of the message, saving network bandwidth. However, unlike broadcasting, with multicasting not every member of the network receives the message. Only those members who have explicitly expressed an interest in the particular multicast stream receive it.

Multicasting introduces a concept called a multicast group, where each multicast group is associated with a specific IP address. A particular network node (host) can join one or more multicast groups, which notifies the associated router or routers that there is an interest in receiving the multicast streams for those groups. When a sender, who does not necessarily have to be part of the group, sends a message to a particular multicast group, that message is routed only to those subnets that contain members of that multicast group. This avoids flooding the entire network with the message, as is the case for broadcast traffic.

Multicast addresses
The Internet Assigned Numbers Authority (IANA), which controls the assignment of IP addresses, has assigned the old Class D IP address range to IP multicasting. Of this entire range, which extends from 224.0.0.0 to 239.255.255.255, the 224.0.0.* addresses are reserved for router management and communication, and some of the 224.0.1.* addresses are reserved for particular standardized multicast applications. Each of the remaining addresses corresponds to a particular general-purpose multicast group. The Service Location Protocol uses address 239.255.255.253 for all of its multicast traffic. The port number for SLP is 427, for both unicast and multicast.
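The address facts above can be checked quickly with the Python standard library's ipaddress module: the SLP multicast address falls inside the old Class D (multicast) range, while an ordinary host address does not.

```python
# Checking the multicast address facts from the text with the Python
# standard library. The 192.168.1.10 host address is just an example.
import ipaddress

SLP_MULTICAST_ADDR = "239.255.255.253"
SLP_PORT = 427

# The old Class D range, 224.0.0.0-239.255.255.255, as a /4 network.
multicast_range = ipaddress.ip_network("224.0.0.0/4")

print(ipaddress.ip_address(SLP_MULTICAST_ADDR).is_multicast)        # True
print(ipaddress.ip_address(SLP_MULTICAST_ADDR) in multicast_range)  # True
print(ipaddress.ip_address("192.168.1.10").is_multicast)            # False
```

This kind of check is also handy when configuring routers or firewalls for SLP, since both need to single out exactly this address and port.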
Configuration recommendations
Ideally, after IBM TotalStorage Productivity Center is installed, it would discover all storage devices that it can physically reach over the IP network. In most situations, however, this is not the case, primarily due to the previously mentioned limitations of multicasting and the fact that the majority of routers have multicasting disabled by default. As a result, in most cases, without any additional configuration, IBM TotalStorage Productivity Center discovers only those storage devices that reside in its own subnet, but no more. The following sections provide some configuration recommendations to enable TotalStorage Productivity Center to discover a larger set of storage devices.

Router configuration
The vast majority of the intelligence that allows multicasting to work is implemented in the router operating system software. As a result, it is necessary to properly configure the routers in the network for multicasting to work effectively. Unfortunately, there is a dizzying array of protocols and algorithms that can be used to configure particular routers to enable multicasting. These are the most common ones:
- Internet Group Management Protocol (IGMP): Used to register individual hosts in particular multicast groups, and to query group membership on particular subnets.
- Distance Vector Multicast Routing Protocol (DVMRP): A set of routing algorithms that use a technique called Reverse Path Forwarding to decide how multicast packets are to be routed in the network.
- Protocol-Independent Multicast (PIM): Comes in two varieties, dense mode (PIM-DM) and sparse mode (PIM-SM), optimized for networks where either a large percentage of nodes require multicast traffic (dense) or a small percentage require it (sparse).
- Multicast Open Shortest Path First (MOSPF) is an extension of OSPF, a "link-state" unicast routing protocol that attempts to find the shortest path between any two networks or subnets to provide the most optimal routing of packets.

The routers of interest are all those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. You can configure the routers in the network to enable multicasting in general, or at least to allow multicasting for the SLP multicast address, 239.255.255.253, and port, 427. This is the most generic solution and permits discovery to work the way that it was intended by the designers of SLP. To properly configure your routers for multicasting, refer to your router manufacturer's reference and configuration documentation. Although older hardware may not support multicasting, all modern routers do. However, in most cases, multicast support is disabled by default, which means that multicast traffic is sent only among the nodes of a subnet but is not forwarded to other subnets. For SLP, this means that service discovery is limited to only those agents that reside in the same subnet.

Firewall configuration
Where one or more firewalls are used between TotalStorage Productivity Center and the storage devices that are to be managed, the firewalls need to be configured to pass traffic in both directions, because SLP communication is two-way. This means that when TotalStorage Productivity Center, for example, queries an SLP DA that is behind a firewall for the registered services, the response does not use an already opened TCP/IP session but establishes another connection in the direction from the SLP DA to TotalStorage Productivity Center. For this reason, port 427 should be opened in both directions; otherwise the response will not be received and TotalStorage Productivity Center will not recognize services offered by this SLP DA.
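One simple way to confirm that a firewall passes TCP traffic on the SLP port is to attempt a connection from each side. The helper below is our own illustration, not a product utility; run it from the TotalStorage Productivity Center side against the DA's address, then repeat from the DA's side in the opposite direction to verify that port 427 is open both ways (the host name is a placeholder):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host): check the SLP port on an SLP DA behind a firewall.
# tcp_port_open("slpda.example.com", 427)
```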
SLP DA configuration
If router configuration is not feasible, another technique is to use SLP DAs to circumvent the multicast limitations. Because with statically configured DAs all service requests from the UA are unicast instead of multicast, it is possible to simply configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet, although more can be configured without harm, perhaps for reasons of fault tolerance. Each of these DAs can discover all services within its own subnet, but no services outside its own subnet. To allow Productivity Center to discover all of the devices, you must statically configure it with the addresses of each of these DAs. You accomplish this using the IBM Director GUI's Discovery Preference panel. From the MDM SLP Configuration tab, you can enter a list of DA addresses. As described previously, Productivity Center unicasts service requests to each of these statically configured DAs, but also multicasts service requests on the local subnet on which Productivity Center is installed. Figure 2-10 on page 46 displays a sample environment where DAs have been used to bridge the multicast gap between subnets in this manner.

Note: At this time, you cannot set up IBM TotalStorage Productivity Center for Data to use remote DAs as Productivity Center for Disk and Productivity Center for Replication can. You need to define all remote CIM Agents by creating a new entry in the CIMOM Login panel, or you can register remote services in a DA that resides in the local subnet. Refer to "Configuring the CIM Agents" on page 290 for detailed information.

Chapter 2. Key concepts 45
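The fan-out just described can be sketched as a small computation (our own sketch, not product code): a UA sends a unicast service request to each statically configured DA, plus one multicast request on its local subnet.

```python
# Sketch of the query fan-out: unicast to each statically configured DA,
# plus one multicast request that reaches only the local subnet.
SLP_MCAST_ADDR = "239.255.255.253"
SLP_PORT = 427

def slp_query_targets(static_das):
    """Return (address, mode) pairs a UA would send service requests to."""
    targets = [(da, "unicast") for da in static_das]
    targets.append((SLP_MCAST_ADDR, "multicast"))  # local subnet only
    return targets

# Two hypothetical DAs, one per remote subnet containing storage devices:
print(slp_query_targets(["10.1.2.5", "10.1.3.5"]))
```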
Figure 2-10 Recommended SLP configuration

You can easily configure an SLP DA by changing the configuration of the SLP SA included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA instead. The procedure to perform this configuration is explained in 6.2, "SLP DA definition" on page 248. Note that the change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function as normal, sending registration and de-registration commands to the DA directly.

SLP configuration with services outside the local subnet
An SLP DA and SA can also be configured to cache CIM services information from non-local subnets. Usually CIM Agents or CIMOMs have a local SLP SA function. When there is a need to discover CIM services outside the local subnet and the network configuration does not permit the use of an SLP DA in each of them (for example, firewall rules do not allow two-way communication on port 427), remote services can be registered on the SLP DA in the local subnet. This configuration can be done by using slptool, which is part of the SLP installation packages. Such registration is not persistent across system restarts. To achieve persistent registration of services outside of the local subnet, these services need to be defined in the registration file used by the SLP DA at startup. Refer to 5.7.3, "Setting up the Service Location Protocol Directory Agent" on page 221 for information on setting up the slp.reg file.
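The slp.reg file referenced above uses the standard SLP registration-file syntax. The entry below is a hypothetical example (the host name, port, and lifetime are our assumptions) of statically registering a remote CIMOM's WBEM service so that it survives restarts of the DA:

```
# slp.reg - static registrations loaded by the SLP DA at startup
# format: service-url,language-tag,lifetime(seconds)
service:wbem:https://remote-cimom.example.com:5989,en,65535
```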
2.3.2 Common Information Model
The CIM Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the Common Information Model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors.

CIM uses schemas as a kind of class library to define objects and methods. The schemas can be categorized into three types:
- Core schema: Defines classes and relationships of objects
- Common schema: Defines common components of systems
- Extension schema: Entry point for vendors to implement their own schema

The CIM/WBEM architecture defines the following elements:
- Agent code or CIM Agent: An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device. The Agent is embedded into a device, which can be hardware or software.
- CIM Object Manager: The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or a device provider such as a CIM Agent.
- Client application or CIM Client: A storage management program, such as TotalStorage Productivity Center, that initiates CIM requests to the CIM Agent for the device. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents.
- Device or CIM Managed Object: A Managed Object is a hardware or software component that can be managed by a management application by using CIM, for example, an IBM SAN Volume Controller.
- Device provider: A device-specific handler that serves as a plug-in for the CIMOM. That is, the CIMOM uses the handler to interface with the device.
Note: The terms CIM Agent and CIMOM are often used interchangeably. At this time, few devices come with an integrated CIM Agent. Most devices need an external CIMOM for CIM to enable management applications (CIM Clients) to talk to the device. For ease of installation, IBM provides an Integrated Configuration Agent Technology (ICAT), which is a bundle that includes the CIMOM, the device provider, and an SLP SA.

Integrating legacy devices into the CIM model
Because these standards are still evolving, we cannot expect that all devices will support the native CIM interface. For this reason, SMI-S introduces CIM Agents and CIM Object Managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMI-S. An agent is used for one device and an object manager for a set of devices. This type of operation is also called the proxy model and is shown in Figure 2-11 on page 48.
The CIM Agent or CIMOM translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage ESS includes a CIMOM inside it. In the future, more and more devices will be natively CIM compliant, and will therefore have a built-in Agent as shown in the "Embedded Model" in Figure 2-11.

When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage-networking technology faster and build larger, more powerful networks.

Figure 2-11 CIM Agent and Object Manager overview (diagram: CIM Client management applications use CIM-XML operations over HTTP [TCP/IP] to talk to an Agent in front of one device, an Object Manager with providers in front of several devices or subsystems, or an Agent embedded in the device itself; the first two are the proxy model, the last the embedded model)

CIM Agent implementation
When a CIM Agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables IBM TotalStorage Productivity Center for Data, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more effectively. By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together.
This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-12 on page 49 shows an overview of the CIM Agent.
Figure 2-12 CIM Agent overview

The CIM Agent includes a CIMOM, which adapts various devices using a plug-in called a provider. The CIM Agent can work as a proxy or can be embedded in storage devices. When the CIM Agent is installed as a proxy, the IBM CIM Agent can be installed on the same server that supports the device user interface.

CIM Object Manager
The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type and more than one device of each type on a subsystem.

The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between the catcher and sender use the language and models defined by the SMI-S standard. This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions.

2.4 Component interaction
This section provides an overview of the interactions between the different components by using standardized management methods and protocols.

2.4.1 CIMOM discovery with SLP
The SMI-S specification introduces SLP as the method for the management applications (the CIM Clients) to locate managed objects. SLP is explained in more detail in 2.3, "Service Location Protocol (SLP) overview" on page 38. Figure 2-13 on page 50 shows the interaction between CIMOMs and SLP components.
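The XML messages exchanged over HTTP have a characteristic shape. The fragment below is a hand-built, simplified illustration (the host, namespace, and class name are our assumptions) of what a CIM-XML EnumerateInstances request looks like on the wire; a real CIM Client library would generate and send this for you:

```python
# Build (but do not send) a simplified CIM-XML request, to illustrate the
# XML-over-HTTP transaction style used by CIM Clients.
CIM_XML_BODY = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
  <MESSAGE ID="1" PROTOCOLVERSION="1.0">
    <SIMPLEREQ>
      <IMETHODCALL NAME="EnumerateInstances">
        <!-- namespace and class name are illustrative -->
        <LOCALNAMESPACEPATH>
          <NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/>
        </LOCALNAMESPACEPATH>
        <IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="CIM_ComputerSystem"/></IPARAMVALUE>
      </IMETHODCALL>
    </SIMPLEREQ>
  </MESSAGE>
</CIM>
"""

def build_request(host: str, port: int) -> str:
    """Assemble the HTTP POST that would carry the CIM-XML payload."""
    headers = [
        "POST /cimom HTTP/1.1",
        f"Host: {host}:{port}",
        "Content-Type: application/xml; charset=utf-8",
        "CIMOperation: MethodCall",        # extension headers defined by CIM-XML
        "CIMMethod: EnumerateInstances",
        "CIMObject: root/cimv2",
        f"Content-Length: {len(CIM_XML_BODY.encode())}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + CIM_XML_BODY

request = build_request("cimom.example.com", 5989)  # hypothetical CIMOM endpoint
```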
Figure 2-13 SMI-S extensions to WBEM/CIM (diagram: a CIM Client management application uses SLP UA, DA, and SA components over TCP/IP to locate Agents and Object Managers, then uses CIM-XML operations over HTTP [TCP/IP] to manage devices and subsystems in the proxy and embedded models)

2.4.2 How CIM Agent works
The CIM Agent typically works as explained in the following sequence and as shown in Figure 2-14 on page 51:
1. The client application locates the CIMOM by calling an SLP directory service.
2. The CIMOM is invoked.
3. The CIMOM registers itself to the SLP and supplies its location, IP address, port number, and the type of service it provides.
4. With this information, the client application starts to communicate directly with the CIMOM.
5. The client application sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request.
6. The CIMOM directs the requests to the appropriate functional component of the CIMOM or to a device provider.
7. The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy client application requests.
8.-10. The client application requests are completed.
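As a toy model (ours, not product code), steps 1 through 4 above can be sketched with a dictionary standing in for the SLP directory service: the CIMOM registers its location and service type, and the client looks it up before addressing requests to it directly.

```python
# Toy model of the locate-then-request sequence. All names are invented.
slp_directory = {}

def cimom_register(service_type, host, port):
    """Step 3: the CIMOM supplies its location and service type to SLP."""
    slp_directory[service_type] = (host, port)

def client_locate(service_type):
    """Step 1: the client application locates the CIMOM via SLP."""
    return slp_directory[service_type]

cimom_register("service:wbem", "cimom.example.com", 5989)
endpoint = client_locate("service:wbem")
# The client now sends CIM requests (steps 4-5) directly to `endpoint`.
```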
Figure 2-14 CIM Agent work flow

2.5 Tivoli Common Agent Services
The Tivoli Common Agent Services is a new concept whose goal is to provide a set of functions for the management of agents that is common to all Tivoli products. At the time of this writing, IBM TotalStorage Productivity Center for Fabric and IBM TotalStorage Productivity Center for Data are the first applications that use this new concept. See Figure 2-15 on page 52 for an overview of the three elements in the Tivoli Common Agent Services infrastructure. In each of the planning and installation guides of the Productivity Center for Fabric and Productivity Center for Data, there is a chapter that provides information about the benefits, system requirements and sizing, security considerations, and the installation procedures.

The Agent Manager is the central network element that, together with the distributed Common Agents, builds an infrastructure that is used by other applications to deploy and manage an agent environment. Each application uses a Resource Manager that is built into the application server (Productivity Center for Data or Productivity Center for Fabric) to integrate into this environment.

Note: You can have multiple Resource Managers of the same type using a single Agent Manager. This may be necessary to scale the environment when, for example, one Data Manager cannot handle the load any more. Each Agent is managed by only one of the Data Managers in this case.
Figure 2-15 Tivoli Common Agent Services

The Common Agent provides the platform for the application-specific agents. Depending on the tasks for which a subagent is used, the Common Agent is installed on the customers' application servers, desktop PCs, or notebooks.

Note: In different documentation, readme files, and directory and file names, you also see the terms Common Endpoint, Endpoint, or simply EP. This always refers to the Common Agent, which is part of the Tivoli Common Agent Services.

The Common Agent talks to the application-specific subagent, to the Agent Manager, and to the Resource Manager, but the actual system-level functions are invoked by the subagent. The information that the subagent collects is sent directly to the Resource Manager by using the application's native protocol. This makes it possible to have down-level agents in the same environment as the new agents that are shipped with the IBM TotalStorage Productivity Center.

Certificates are used to validate whether a requester is allowed to establish communication. Demo keys are supplied to quickly set up and configure a small environment; because every installation CD uses the same certificates, this is not secure. If you want to use Tivoli Common Agent Services in a production environment, we recommend that you use your own keys, which can be created during the Tivoli Agent Manager installation.

One of the most important certificates is stored in the agentTrust.jks file. The certificate can also be created during the installation of Tivoli Agent Manager. If you do not use the demo certificates, you need to have this file available during the installation of the Common Agent and the Resource Manager. This file is locked with a password (the agent registration password) to secure access to the certificates. You can use the ikeyman utility in the javajre subdirectory to verify your password.
2.5.1 Tivoli Agent Manager
The Tivoli Agent Manager requires a database to store information in what is called the registry. Currently there are three options for installing the database: using IBM Cloudscape™ (provided on the installation CD), a local DB2 database, or a remote DB2 database. Because the registry does not contain much information, using the Cloudscape database is acceptable. In our setup described later in the book, we chose a local DB2 database, because the DB2 database was required for another component that was installed on the same machine.

WebSphere Application Server is the second prerequisite for the Tivoli Agent Manager. It is installed if you use the Productivity Center Suite Installer or if you choose to use the Tivoli Agent Manager installer. We recommend that you do not install WebSphere Application Server manually.

Three dedicated ports are used by the Agent Manager (9511-9513). Port 9511 is the most important port because you have to enter this port during the installation of a Resource Manager or Common Agent if you choose to change the defaults. When the WebSphere Application Server is being installed, make sure that the Microsoft Internet Information Server (IIS) is not running, or better still, that it is not installed.

Port 80 is used by the Tivoli Agent Manager for the recovery of agents that can no longer communicate with the manager because of lost passwords or certificates. This Agent Recovery Service is located by a DNS entry with the unqualified host name TivoliAgentRecovery. Periodically, check the Agent Manager log for agents that are unable to communicate with the Agent Manager server. The recovery log is in the %WAS_INSTALL_ROOT%\AgentManager\logs\SystemOut.log file. Use the information in the log file to determine why the agent could not register and then take corrective action.

During the installation, you also have to specify the agent registration password and the Agent Registration Context Root.
The password is stored in the AgentManager.properties file on the Tivoli Agent Manager. This password is also used to lock the agentTrust.jks certificate file.

Important: A detailed description of how to change the password is available in the corresponding Resource Manager Planning and Installation Guide. Because this involves redistributing the agentTrust.jks files to all Common Agents, we encourage you to use your own certificates from the beginning.

To control the access from the Resource Manager to the Common Agent, certificates are used to make sure that only an authorized Resource Manager can install and run code on a computer system. This certificate is stored in the agentTrust.jks file and locked with the agent registration password.

2.5.2 Common Agent
As mentioned earlier, the Common Agent is used as a platform for application-specific agents. These agents are sometimes called subagents. The subagents can be installed using two different methods:
- Using an application-specific installer
- From a central location once the Common Agent is installed
When you install the software, the agent has to register with the Tivoli Agent Manager. During this procedure, you need to specify the registration port on the manager (by default 9511). Furthermore, you need to specify an agent registration password. This registration is performed by the Common Agent, which is installed automatically if not already installed.

If the subagent is deployed from a central location, port 9510 is used by default by the installer (running on the central machine) to communicate with the Common Agent to download and install the code. When this method is used, no password or certificate is required, because these were already provided during the Common Agent installation on the machine. If you choose to use your own certificate during the Tivoli Agent Manager installation, you need to supply it for the Common Agent installation.
Part 2
Installing the IBM TotalStorage Productivity Center base product suite

In this part of the book we provide information to help you successfully install the prerequisite products that are required before you can install the IBM TotalStorage Productivity Center product suite. This includes installing:
- DB2
- IBM Director
- WebSphere Application Server
- Tivoli Agent Manager
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication

© Copyright IBM Corp. 2005. All rights reserved. 55
Chapter 3. Installation planning and considerations

IBM TotalStorage Productivity Center is made up of several products that can be installed individually, as a complete suite, or in any combination in between. By installing multiple products, a synergy is created that allows the products to interact with each other to provide a more complete solution to help you meet your business storage management objectives.

This chapter contains information that you will need before beginning the installation. It also discusses the supported environments and pre-installation tasks.
3.1 Configuration
You can install the storage management components of IBM TotalStorage Productivity Center on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are:
- Windows 2000 Server with Service Pack 4
- Windows 2000 Advanced Server
- Windows 2003 Enterprise Edition

Note: Refer to the following Web site for the updated support summaries, including specific software, hardware, and firmware levels supported:
http://www.storage.ibm.com/software/index.html

If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend that you install IBM Tivoli Provisioning Manager on a separate Windows machine.

3.2 Installation prerequisites
This section lists the minimum prerequisites for installing IBM TotalStorage Productivity Center.

Hardware
The following hardware is required:
- Dual Pentium® 4 or Intel® Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- Network connectivity
- Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional)
- 5 GB available disk space

Database
You must comply with the following database requirements: The installation of DB2 Version 8.2 is part of the Prerequisite Software Installer and is required by all the managers.
Other databases that are supported are:
- For IBM TotalStorage Productivity Center for Fabric:
  - IBM Cloudscape 5.1.60 (provided on the CD)
- For IBM TotalStorage Productivity Center for Data:
  - Microsoft SQL Server Version 7.0, 2000
  - Oracle 8i, 9i, 9i V2
  - Sybase SQL Server (Adaptive Server Enterprise) Version 12.5 or higher
  - IBM Cloudscape 5.1.60 (provided on the CD)
3.2.1 TCP/IP ports used by TotalStorage Productivity Center
This section provides an overview of the TCP/IP ports used by IBM TotalStorage Productivity Center.

TCP/IP ports used by Disk and Replication Manager
The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication Manager installation program preconfigures the TCP/IP ports used by WebSphere. Table 3-1 lists the values that correspond to the WebSphere ports.

Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication Base
Port value   WebSphere port
427          SLP port
2809         Bootstrap port
9080         HTTP Transport port
9443         HTTPS Transport port
9090         Administrative Console port
9043         Administrative Console Secure Server port
5559         JMS Server Direct Address port
5557         JMS Server Security port
5558         JMS Server Queued Address port
8980         SOAP Connector Address port
7873         DRS Client Address port

TCP/IP ports used by Agent Manager
The Agent Manager uses the TCP/IP ports listed in Table 3-2.

Table 3-2 TCP/IP ports for Agent Manager
Port value   Usage
9511         Registering agents and resource managers
9512         Providing configuration updates; renewing and revoking certificates; querying the registry for agent information; requesting ID resets
9513         Requesting updates to the certificate revocation list; requesting Agent Manager information; downloading the truststore file
80           Agent recovery service

Chapter 3. Installation planning and considerations 59
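During installation planning it can help to confirm that none of these ports are already taken on the management server. The small local check below is our own helper, not part of the installer; it simply tries to connect to each port on the loopback interface and reports any that already have a listener:

```python
import socket

# Ports from Tables 3-1 and 3-2 that should be free on the management server.
REQUIRED_PORTS = [427, 2809, 9080, 9443, 9090, 9043,
                  5557, 5558, 5559, 8980, 7873,   # WebSphere (Table 3-1)
                  9511, 9512, 9513, 80]           # Agent Manager (Table 3-2)

def port_in_use(port: int) -> bool:
    """True if something on this host is already listening on the TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex(("127.0.0.1", port)) == 0

conflicts = [p for p in REQUIRED_PORTS if port_in_use(p)]
# A nonempty list means another product (for example IIS on port 80)
# must be stopped or reconfigured before installation.
```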
TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric
The Fabric Manager uses the default TCP/IP ports listed in Table 3-3.

Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric
Port value   Usage
8080         NetView Remote Web console
9550         HTTP port
9551         Reserved
9552         Reserved
9553         Cloudscape server port
9554         NVDAEMON port
9555         NVREQUESTER port
9556         SNMPTrapPort, port on which to get events forwarded from Tivoli NetView
9557         Reserved
9558         Reserved
9559         Tivoli NetView Pager daemon
9560         Tivoli NetView Object Database daemon
9661         Tivoli NetView Topology Manager daemon
9562         Tivoli NetView Topology Manager socket
9563         Tivoli General Topology Manager
9564         Tivoli NetView OVs_PMD request services
9565         Tivoli NetView OVs_PMD management services
9565         Tivoli NetView trapd socket
9567         Tivoli NetView PMD service
9568         Tivoli NetView General Topology map service
9569         Tivoli NetView Object Database event socket
9570         Tivoli NetView Object Collection facility socket
9571         Tivoli NetView Web Server socket
9572         Tivoli NetView SnmpServer
Fabric Manager remote console TCP/IP default ports
The Fabric Manager uses the ports in Table 3-4 for its remote console.

Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console
Port value   Usage
9560         HTTP port
9561         Reserved
9561         Reserved
9562         ASF Jakarta Tomcat's Local Server port
9563         Tomcat's warp port
9564         NVDAEMON port
9565         NVREQUESTER port
9569         Tivoli NetView Pager daemon
9570         Tivoli NetView Object Database daemon
9571         Tivoli NetView Topology Manager daemon
9572         Tivoli NetView Topology Manager socket
9573         Tivoli General Topology Manager
9574         Tivoli NetView OVs_PMD request services
9575         Tivoli NetView OVs_PMD management services
9576         Tivoli NetView trapd socket
9577         Tivoli NetView PMD service
9578         Tivoli NetView General Topology map service
9579         Tivoli NetView Object Database event socket
9580         Tivoli NetView Object Collection facility socket
9581         Tivoli NetView Web Server socket
9582         Tivoli NetView SnmpServer

Fabric Agents TCP/IP ports
The Fabric Agents use the TCP/IP ports listed in Table 3-5.

Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric Agents
Port value   Usage
9510         Common agent
9514         Used to restart the agent
9515         Used to restart the agent
3.2.2 Default databases created during the installation
During the installation of IBM TotalStorage Productivity Center, we recommend that you use DB2 as the preferred database type. Table 3-6 lists all the default databases that the installer creates during the installation.

Table 3-6 Default DB2 databases
Application                                                                      Default database name (DB2)
IBM Director                                                                     No default; we created database IBMDIR
Tivoli Agent Manager                                                             IBMCDB
IBM TotalStorage Productivity Center for Disk and Replication Base               DMCOSERV
IBM TotalStorage Productivity Center for Disk                                    PMDATA
IBM TotalStorage Productivity Center for Replication hardware subcomponent       ESSHWL
IBM TotalStorage Productivity Center for Replication element catalog             ELEMCAT
IBM TotalStorage Productivity Center for Replication replication manager         REPMGR
IBM TotalStorage Productivity Center for Replication SVC hardware subcomponent   SVCHWL
IBM TotalStorage Productivity Center for Fabric                                  ITSANM
IBM TotalStorage Productivity Center for Data                                    No default; we created database TPCDATA

3.3 Our lab setup environment
This section gives a brief overview of what our lab setup environment looked like and what we used to document the installation.

Server hardware used
We used four IBM Eserver xSeries servers with:
- 2 x 2.4 GHz CPU per system
- 4 GB memory per system
- 73 GB HDD per system
- Windows 2000 with Service Pack 4

System 1
The name of our first system was Colorado. The following applications were installed on this system:
- DB2
- IBM Director
- WebSphere Application Server
- WebSphere Application Server update
- Tivoli Agent Manager
- IBM TotalStorage Productivity Center for Disk and Replication Base
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Data
- IBM TotalStorage Productivity Center for Fabric

System 2
The name of our second system was Gallium. The following applications were installed on this server:
- Data Agent

System 3
The name of our third system was PQDISRV. The following applications were installed on this server:
- DB2
- Application software

Systems used for CIMOM servers
We used four xSeries servers for our Common Information Model Object Manager (CIMOM) servers. They consisted of:
- 2 GHz CPU per system
- 2 GB memory per system
- 40 GB HDD per system
- Windows 2000 Server with Service Pack 4

CIMOM system 1
Our first CIMOM server was named TPCMAN. On this server, we installed:
- ESS CLI
- ESS CIMOM
- LSI Provider (FAStT CIMOM)

CIMOM system 3
Our third CIMOM system was named SVCCON. We installed the following applications on this server:
- SAN Volume Controller (SVC) Console
- SVC CIMOM

Networking
We used the following switches for networking:
- IBM Ethernet 10/100 24-port switch
- 2109 F16 Fiber switch

Storage devices
We employed the following storage devices:
- IBM TotalStorage Enterprise Storage Server (ESS) 800 and F20
- DS8000
- DS6000
- DS4000
- IBM SVC

Figure 3-1 on page 64 shows a diagram of our lab setup environment.
Figure 3-1 Lab setup environment (diagram: the Colorado, Gallium, PQDISRV, and Faroe servers, the ESS Management Console, and the SVC, ESS, and FAStT CIMOM servers connected through an Ethernet switch, with a 2109-F16 Fiber switch connecting the FAStT 700 storage and the SVC cluster)

3.4 Pre-installation check list
You need to complete the following tasks in preparation for installing the IBM TotalStorage Productivity Center. Print the tables in Appendix A, "Worksheets" on page 991, to keep track of the information you will need during the installation, such as user names, ports, IP addresses, and locations of servers and managed devices.
1. Determine which elements of the TotalStorage Productivity Center you will install.
2. Uninstall Internet Information Services.
3. Grant the following privileges to the user account that will be used to install the TotalStorage Productivity Center:
   - Act as part of the operating system
   - Create a token object
   - Increase quotas
   - Replace a process-level token
   - Log on as a service
4. Install and configure Simple Network Management Protocol (SNMP) (Fabric requirement).
5. Identify any firewalls and obtain the required authorization.
6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.
3.5 User IDs and security
This section discusses the user IDs that are used during the installation and those that are used to manage and work with TotalStorage Productivity Center. It also explains how you can increase the basic security of the different components.

3.5.1 User IDs
This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center environment. For some of the IDs, refer to Table 3-8 for a link to additional information that is available in the manuals.

Suite Installer user
We recommend that you use the Windows Administrator or a dedicated user for the installation of TotalStorage Productivity Center. That user ID should have the user rights listed in Table 3-7.

Table 3-7 Requirements for the Suite Installer user

User rights/policy                     Used for
Act as part of the operating system    DB2, Productivity Center for Disk, Fabric Manager
Create a token object                  DB2, Productivity Center for Disk
Increase quotas                        DB2, Productivity Center for Disk
Replace a process-level token          DB2, Productivity Center for Disk
Log on as a service                    DB2
Debug programs                         Productivity Center for Disk

Table 3-8 shows the user IDs that are used in a TotalStorage Productivity Center environment. It provides information about the Windows group to which the user ID must belong, whether it is a new user ID that is created during the installation, and when the user ID is used.

Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment

Element          User ID        New user              Type     Group or groups        Usage
Suite Installer  Administrator  No
DB2              db2admin(a)    Yes, will be created  Windows                         DB2 management and Windows Service Account
IBM Director     tpcadmin(a)    No                    Windows  DirAdmin or DirSuper   Windows Service Account (see also “IBM Director” on page 67)

Chapter 3. Installation planning and considerations 65
Element           User ID      New user              Type                  Group or groups     Usage
Resource Manager  manager(b)   No, default user      Tivoli Agent Manager  N/A, internal user  Used during the registration of a Resource Manager to the Agent Manager
Common Agent      AgentMgr(b)  No                    Tivoli Agent Manager  N/A, internal user  Used to authenticate agents and lock the certificate key files (see also “Common Agent” on page 67)
Common Agent      itcauser(b)  Yes, will be created  Windows               Windows             Windows Service Account
TotalStorage Productivity Center universal user  tpccimom(a)  Yes, will be created  Windows  DirAdmin  This ID is used to accomplish connectivity with the managed devices. For example, this ID has to be set up on the CIM Agents.
Tivoli NetView    (c)                                Windows                                   See “Fabric Manager User IDs” on page 68
IBM WebSphere     (a)                                Windows                                   See “Fabric Manager User IDs” on page 68
Host Authentication  (a)                             Windows                                   See “Fabric Manager User IDs” on page 68

a. This account can have any name you choose.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here. See “Fabric Manager User IDs” on page 68.

Granting privileges
Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and the IBM TotalStorage Productivity Center for Replication. These user rights are governed by the local security policy and are not initially set as the defaults for administrators. They may not be in effect when you log on as the local administrator.

If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged on user name, the program can optionally set them. The program can set the local security policy settings to assign these user rights. Alternatively, you can manually set them prior to performing the installation. To manually set these privileges, follow these steps:
1. Click Start → Settings → Control Panel.
2.
Double-click Administrative Tools.
3. Double-click Local Security Policy.
4. The Local Security Settings window opens. Expand Local Policies. Then double-click User Rights Assignments to see the policies in effect on your system. For each policy added to the user, perform the following steps:
   a. Highlight the policy to be selected.
   b. Double-click the policy and look for the user’s name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are selected.
   c. If the user name does not appear in the list for the policy, you must add the policy to the user. Perform the following steps to add the user to the list:
      i. In the Local Security Policy Setting window, click Add.
      ii. In the Select Users or Groups window, under the Name column, highlight the user or group.
      iii. Click Add to place the name in the lower window.
      iv. Click OK to add the policy to the user or group.
5. After you set these user rights, either by using the installation program or manually, log off the system and then log on again for the user rights to take effect.
6. Restart the installation program to continue with the IBM TotalStorage Productivity Center for Disk and Replication Base installation.

TotalStorage Productivity Center communication user
The communication user account is used for authentication between several different elements of the environment. For example, if WebSphere Application Server is installed with the Suite Installer, its Administrator ID is the communication user.

IBM Director
With Version 4.1, you no longer need to create an “internal” user account. All user IDs must be operating system accounts and members of one of the following groups:
- DirAdmin or DirSuper groups (Windows), diradmin or dirsuper groups (Linux)
- Administrator or Domain Administrator groups (Windows), root (Linux)
In addition, a host authentication password is used to allow managed hosts and remote consoles to communicate with IBM Director.

Resource Manager
The user ID and password (the default is manager and password) for the Resource Manager is stored in the AgentManager\config\Authorization.xml file on the Agent Manager. Since this is used only during the initial registration of a new Resource Manager, there is no problem with changing the values at any time. You can find a detailed procedure on how to change this in the Installation and Planning Guides of the corresponding manager.
You can have multiple Resource Manager user IDs if you want to separate the administrators for the different managers, for example, for IBM TotalStorage Productivity Center for Data and IBM TotalStorage Productivity Center for Fabric.

Common Agent
Each time the Common Agent is started, this context and password are used to validate the registration of the agent with the Tivoli Agent Manager. Furthermore, the password is used to lock the certificate key files (agentTrust.jks). The default password is changeMe, but you should change the password when you install the Tivoli Agent Manager. The Tivoli Agent Manager stores this password in the AgentManager.properties file.

If you start with the defaults, but want to change the password later, all the agents have to be changed. A procedure to change the password is available in the Installation and Planning Guides of the corresponding managers (at this time, Data or Fabric). Since the password is used to lock the certificate files, you must also apply this change to the Resource Managers.
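Because the registration password also locks the agentTrust.jks keystore, you can verify which password a given key file was locked with by listing its contents with the Java keytool utility, which ships with the JVM used by the Agent Manager. A minimal sketch, assuming the default password changeMe and that you run it in the directory containing the key file:

```
REM List the certificates in the agent trust file; the keystore password is
REM the Agent registration password (changeMe by default)
keytool -list -keystore agentTrust.jks -storepass changeMe
```

If the command reports an incorrect-password error, the key file was locked with a non-default registration password.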
The Common Agent user ID AgentMgr is not a true user ID, but rather the context in which the agent is registered at the Tivoli Agent Manager. There is no need to change this, so we recommend that you accept the default.

TotalStorage Productivity Center universal user
The account used to accomplish connectivity with managed devices has to be part of the DirAdmin (Windows) or diradmin (Linux) group. This user ID communicates with CIMOMs during and after the installation. It also communicates with WebSphere.

Fabric Manager User IDs
During the installation of IBM TotalStorage Productivity Center for Fabric, you can select whether you want to use individual passwords for such subcomponents as DB2, IBM WebSphere, NetView, and the Host Authentication. You can also choose to use the DB2 administrator’s user ID and password to make the configuration simpler. Figure 4-117 on page 164 shows the window where you can choose the options.

3.5.2 Increasing user security
The goal of increasing security is to have multiple roles available for the various tasks that can be performed. Each role is associated with a certain group. Users are added only to those groups that they need to be part of to do their work. Not all components provide a way to increase security, and some methods require a degree of knowledge about the specific components to perform the configuration successfully.

IBM TotalStorage Productivity Center for Data
During the installation of Productivity Center for Data, you can enter the name of a Windows group. Every user within this group is allowed to manage Productivity Center for Data. Other users may only start the interface and look at it. You can add or change the name of that group later by editing the server.config file and restarting Productivity Center for Data.
Productivity Center for Data does not support the following domain login formats for logging in to its server component:
- (domain name)/(username)
- (username)@(domain)
Because it does not support these formats, you must set up the domain users so that they can log in to the server. Perform the following steps before you install Productivity Center for Data in your environment:
1. Create a Local Admin Group.
2. Create a Domain Global Group.
3. Add the Domain Global Group to the Local Admin Group.
Productivity Center for Data looks up the SID for the domain user when the login occurs. You only need to specify a user name and password.
3.5.3 Certificates and key files
Within a TotalStorage Productivity Center environment, several applications use certificates to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli Agent Manager.

Productivity Center for Disk and Replication certificates
The WebSphere Application Server that is part of Productivity Center for Disk and Replication uses certificates for Secure Sockets Layer (SSL) communication. During the installation, the key files can be generated as self-signed certificates, but you must enter a password for each file to lock it. The default file names are:
- MDMServerKeyFile.jks
- MDMServerTrustFile.jks
The default directory for the key files is C:\IBM\mdm\dm\keys.

Tivoli Agent Manager certificates
The Agent Manager comes with demonstration certificates that you can use. However, you can also create new certificates during the installation of Agent Manager (see Figure 4-26 on page 104). If you choose to create new files, the password that you enter on the panel shown in Figure 4-27 on page 105 as the Agent registration password is used to lock the agentTrust.jks key file. The default directory for that key file on the Agent Manager is C:\Program Files\IBM\AgentManager\certs. There are more key files in that directory, but during the installation and first steps, the agentTrust.jks file is the most important one. This is only important if you allow the installer to create your keys.

3.5.4 Services and service accounts
The managers and components that belong to the TotalStorage Productivity Center are started as Windows Services. Table 3-9 provides an overview of the most important services. To keep it simple, we did not include all the DB2 services in the table.

Table 3-9 Services and service accounts

Element  Service name  Service account  Comment
DB2                    db2admin         The account needs to be part of Administrators and DB2ADMNS.
IBM Director                    IBM Director Server                                     Administrator  You need to modify the account to be part of one of the groups: DirAdmin or DirSuper.
Agent Manager                   IBM WebSphere Application Server V5 — Tivoli Agent Manager  LocalSystem  You need to set this service to start automatically, after the installation.
Common Agent                    IBM Tivoli Common Agent — C:\Program Files\tivoli\ep    itcauser
Productivity Center for Data    IBM TotalStorage Productivity Center for Data server    TSRMsrv1
Productivity Center for Fabric  IBM WebSphere Application Server V5 — Fabric Manager    LocalSystem
Element         Service name           Service account  Comment
Tivoli NetView  Tivoli NetView Service  NetView

3.6 Starting and stopping the managers
To start, stop, or restart one of the managers or components, you use the Windows control panel. Table 3-10 shows a list of the services.

Table 3-10 Services used for TotalStorage Productivity Center

Element                         Service name                                                Service account
DB2                                                                                         db2admin
IBM Director                    IBM Director Server                                         Administrator
Agent Manager                   IBM WebSphere Application Server V5 - Tivoli Agent Manager  LocalSystem
Common Agent                    IBM Tivoli Common Agent — C:\Program Files\tivoli\ep        itcauser
Productivity Center for Data    IBM TotalStorage Productivity Center for Data Server        TSRMsrv1
Productivity Center for Fabric  IBM WebSphere Application Server V5 - Fabric Manager        LocalSystem
Tivoli NetView                  Tivoli NetView Service                                      NetView

3.7 Windows Management Instrumentation
Before beginning the Prerequisite Software installation, the Windows Management Instrumentation service must first be stopped and disabled. To disable the service, follow the steps below.
1. Go to Start → Settings → Control Panel → Administrative Tools → Services.
2. Scroll down and double-click the Windows Management Instrumentation service (see Figure 3-2 on page 71).
  • 91. Figure 3-2 Windows Management Instrumentation service 3. In the Windows Management Instrumentation Properties window, go down to Service status and click the Stop button (Figure 3-3). Wait for the service to stop. Figure 3-3 Stopping Windows Management Instrumentation Chapter 3. Installation planning and considerations 71
4. After the service is stopped, in the Windows Management Instrumentation Properties window, change the Startup type to Disabled (Figure 3-4) and click OK.

Figure 3-4 Disabled Windows Management Instrumentation

5. After disabling the service, it may start again. If so, go back and stop the service again. The service should now be stopped and disabled as shown in Figure 3-5.

Figure 3-5 Windows Management Instrumentation successfully disabled

Important: After the Prerequisite Software installation completes, you must enable the Windows Management Instrumentation service before installing the suite. To enable the service, change the Startup type from Disabled (see Figure 3-4) to Automatic.
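The same stop-and-disable (and later re-enable) operations can also be performed from a command prompt, which is convenient when preparing several servers. This is a sketch using the standard Windows service-control commands; winmgmt is the service name of Windows Management Instrumentation, and sc.exe is included with Windows XP and later (for Windows 2000 it ships with the Resource Kit):

```
REM Stop and disable Windows Management Instrumentation
net stop winmgmt
sc config winmgmt start= disabled

REM After the Prerequisite Software installation completes, re-enable it
sc config winmgmt start= auto
net start winmgmt
```

The space after start= is required by the sc.exe syntax.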
3.8 World Wide Web Publishing
As with the Windows Management Instrumentation service, the World Wide Web Publishing service must also be stopped and disabled before starting the Prerequisite Software Installer. To stop the World Wide Web Publishing service, follow the same steps as in 3.7, “Windows Management Instrumentation” on page 70. This service can remain disabled.

3.9 Uninstalling Internet Information Services
Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the following procedure:
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. In the Windows Components panel, deselect IIS.

3.10 Installing SNMP
Before you install the components of the TotalStorage Productivity Center, install and configure SNMP:
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. Double-click Management and Monitoring Tools.
5. In the Windows Components panel, select Simple Network Management Protocol and click OK.
6. Close the panels and accept the installation of the components.
7. The Windows installation CD or installation files are required.

Make sure that the SNMP services are configured as explained in these steps:
8. Right-click My Computer and select Manage.
   a. In the Computer Management window, click Services and Applications.
   b. Double-click Services.
   c. Scroll down to and double-click SNMP Service.
9. In the SNMP Service Properties window, follow these steps:
   a. Click the Traps tab (see Figure 3-6 on page 74).
   b. Make sure that the public name is available.

Figure 3-6 Traps tab in the SNMP Service Properties window

   c. Click the Security tab (see Figure 3-7).
   d. Select Accept SNMP packets from any host.
   e. Click OK.

Figure 3-7 SNMP Security Properties window

10. After you set the public community name, restart the SNMP service.
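The restart of the SNMP service after changing the community settings can also be done from a command prompt. A minimal sketch using the standard Windows service commands ("SNMP Service" is the display name of the service):

```
REM Restart the SNMP service so the new community settings take effect
net stop "SNMP Service"
net start "SNMP Service"
```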
3.11 IBM TotalStorage Productivity Center for Fabric
Prior to installing IBM TotalStorage Productivity Center for Fabric, there are planning considerations and prerequisite tasks that you need to complete.

3.11.1 The computer name
IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the manager, managed hosts, and the remote console. To verify your computer name on Windows, follow this procedure:
1. Right-click the My Computer icon on your desktop and select Properties.
2. The System Properties window opens.
   a. Click the Network Identification tab. Click Properties.
   b. The Identification Changes panel opens.
      i. Verify that your computer name is entered correctly. This is the name by which the computer is identified in the network.
      ii. Verify that the full computer name is a fully qualified host name. For example, user1.sanjose.ibm.com is a fully qualified host name.
      iii. Click More.
   c. The DNS Suffix and NetBIOS Computer Name panel opens. Verify that the Primary DNS suffix field displays a domain name.

Important: The fully qualified host name must match the HOSTS file name (including case-sensitive characters).

3.11.2 Database considerations
When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is automatically created if you specify the DB2 database. The default name for the IBM TotalStorage Productivity Center for Fabric DB2 database is TSANMDB. The database name for Cloudscape is also TSANMDB; you cannot change this database name. If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2 database, and want to save the information in the database before you re-install the manager, you must use DB2 commands to back up the database.

If you are installing the manager on more than one machine in a Windows domain, the managers on different machines may end up sharing the same DB2 database.
To avoid this situation, you must either use different database names or different DB2 user names when installing the manager on different machines. 3.11.3 Windows Terminal Services You cannot use the Windows Terminal Services to access a machine that is running the IBM TotalStorage Productivity Center for Fabric console (either the manager or remote console machine). Any TotalStorage Productivity Center for Fabric dialogs launched from the SAN menu in Tivoli NetView appear on the manager or remote console machine only. The dialogs do not appear in the Windows Terminal Services session. Chapter 3. Installation planning and considerations 75
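The database backup mentioned in 3.11.2, “Database considerations” can be taken with the DB2 command line processor. A minimal sketch, assuming the default database name TSANMDB and an assumed backup target directory of C:\db2backup (an offline backup requires that no applications are connected to the database):

```
REM Offline backup of the Fabric Manager database, run from a DB2 command window
db2 backup database TSANMDB to C:\db2backup
```

The command reports a backup timestamp that you can later use with the DB2 restore command.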
3.11.4 Tivoli NetView
IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric upgrades it to version 7.1.3. If you have a Tivoli NetView release earlier than Version 7.1.1, IBM TotalStorage Productivity Center for Fabric prompts you to uninstall Tivoli NetView before you install this product.

If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped. You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop.
- Web Console
- Web Console Security
- MIB Loader
- MIB Browser
- Netmon Seed Editor
- Tivoli Event Console Adapter

Important: Ensure that the Windows 2000 Terminal Services is not running. Go to the Services panel and check for Terminal Services.

User IDs and password considerations
TotalStorage Productivity Center for Fabric only supports local user IDs and groups. It does not support domain user IDs and groups.
Cloudscape database
If you install TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you need the following user IDs and passwords:
- Agent Manager name or IP address and password
- Common Agent password to register with the Agent Manager
- Resource Manager user ID and password to register with the Agent Manager
- WebSphere administrative user ID and password
- Host authentication password
- Tivoli NetView password only

DB2 database
If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you need the following user IDs and passwords:
- Agent Manager name or IP address and password
- Common Agent password to register with the Agent Manager
- Resource Manager user ID and password to register with the Agent Manager
- DB2 administrator user ID and password
- DB2 user ID and password
- WebSphere administrative user ID and password
- Host authentication password only
- Tivoli NetView password only

Note: If you are running Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must have the “Act as part of the operating system” user right.

WebSphere
To change the WebSphere user ID and password, follow this procedure:
1. Open the install_location\apps\was\properties\soap.client.props file.
2. Modify the following entries:
   – com.ibm.SOAP.loginUserid=user_ID (enter a value for user_ID)
   – com.ibm.SOAP.loginPassword=password (enter a value for password)
3. Save the file.
4. Run the following script:
   ChangeWASAdminPass.bat user_ID password install_dir
   Here user_ID is the WebSphere user ID and password is the password. install_dir is the directory where the manager is installed and is optional. For example, install_dir is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.

3.11.5 Personal firewall
If you have a software firewall on your system, disable the firewall while installing the Fabric Manager. The firewall causes the Tivoli NetView installation to fail. You can enable the firewall after you install the Fabric Manager.

Security considerations
You set up security by using certificates. There are demonstration certificates, or you can generate new certificates. This option is specified when you install the Agent Manager. See Figure 4-26 on page 104. We recommend that you generate new certificates.

If you used the demonstration certificates, continue with the installation. If you generated new certificates, follow this procedure:
1. Copy the manager CD image to your computer.
2. Copy the agentTrust.jks file from the Agent Manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This overwrites the existing agentTrust.jks file.
3. You can write a new CD image with the new file or keep this image on your computer and point the Suite Installer to the directory when requested.

3.11.6 Changing the HOSTS file
When you install Service Pack 3 for Windows 2000 on your computers, follow these steps to avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The problem is caused by the address resolution protocol, which returns the short name and not the fully qualified host name.
You can avoid this problem by changing the entries in the corresponding host tables on the Domain Name System (DNS) server and on the local computer. The fully qualified host name must be listed before the short name as shown in Example 3-1. See 3.11.1, “The computer name” on page 75, for details about determining the host name.

To correct this problem, you have to edit the HOSTS file. The HOSTS file is in the %SystemRoot%\system32\drivers\etc directory.

Example 3-1 Sample HOSTS file

# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

127.0.0.1        localhost
# 192.168.123.146  jason jason.groupa.mycompany.com
192.168.123.146  jason.groupa.mycompany.com jason

Note: Host names are case sensitive, which is a limitation within WebSphere. Check your host name.

3.12 IBM TotalStorage Productivity Center for Data
Prior to installing IBM TotalStorage Productivity Center for Data, there are planning considerations and prerequisite tasks that you need to complete.

3.12.1 Server recommendations
The IBM TotalStorage Productivity Center for Data server component acts as a traffic officer for directing information and handling requests from the agent and UI components installed within an environment. You need to install at least one server within your environment. We recommend that you do not manage more than 1000 agents with a single server. If you need to install more than 1000 agents, we suggest that you install an additional server for those agents to maintain optimal performance.

3.12.2 Supported subsystems and databases
This section contains the subsystems, file system formats, and databases that the TotalStorage Productivity Center for Data supports.
Storage subsystem support
Data Manager currently supports the monitoring and reporting of the following storage subsystems:
- Hitachi Data Systems
- HP StorageWorks
- IBM FAStT 200, 600, 700, and 900 with an SMI-S 1.0 compliant CIM interface
- SAN Volume Controller Console Version 1.1.0.2, 1.1.0.9, 1.2.0.5, 1.2.0.6 (1.3.2 Patch available), 1.2.1.x, 1.2.0.6
- SAN Volume Controller CIMOM Version 1.1.0.1, 1.2.0.4, 1.2.0.5 (1.3.2 patch available), 1.2.0.5, 1.2.1.x
- ESS ICAT 1.1.0.2, 1.2.0.15, 1.2.0.29, 1.2.x, 1.2.1.40 and later for ESS
File system support
Data Manager supports the monitoring and reporting of the following file systems:
- FAT, FAT32
- NTFS4, NTFS5
- EXT2, EXT3
- AIX_JFS
- HP_HFS
- VXFS
- UFS
- TMPFS
- AIX_OLD
- NW_FAT
- NW_NSS
- NF
- WAFL
- FAKE
- AIX_JFS2
- SANFS
- REISERFS

Network File System support
Data Manager currently supports the monitoring and reporting of the following Network File Systems (NFS):
- IBM TotalStorage SAN File System 1.0 (Version 1 Release 1), from AIX V5.1 (32-bit) and Windows 2000 Server/Advanced Server clients
- IBM TotalStorage SAN File System 2.1 and 2.2, from AIX V5.1 (32-bit), Windows 2000 Server/Advanced Server, Red Hat Enterprise Linux 3.0 Advanced Server, and Sun Solaris 9 clients
- General Parallel File System (GPFS) V2.1, V2.2

RDBMS support
Data Manager currently supports the monitoring of the following relational database management systems (RDBMS):
- Microsoft SQL Server 7.0, 2000
- Oracle 8i, 9i, 9i V2, 10g
- Sybase SQL Server 11.0.9 and higher
- DB2 Universal Database™ (UDB) 7.1, 7.2, 8.1, 8.2 (64-bit UDB DB2 instances are supported)

3.12.3 Security considerations
This section describes the security issues that you must consider when installing Data Manager.
User levels
There are two levels of users within IBM TotalStorage Productivity Center for Data: non-administrator users and administrator users. The level of a user determines how they use IBM TotalStorage Productivity Center for Data.

Non-administrator users can:
– View the data collected by IBM TotalStorage Productivity Center for Data.
– Create, generate, and save reports.

IBM TotalStorage Productivity Center for Data administrators can:
– Create, modify, and schedule Pings, Probes, and Scans
– Create, generate, and save reports
– Perform administrative tasks and customize the IBM TotalStorage Productivity Center for Data environment
– Create Groups, Profiles, Quotas, and Constraints
– Set alerts

Important: Security is set up by using certificates. You can use the demonstration certificates or you can generate new certificates. It is recommended that you generate new certificates when you install the Agent Manager.

Certificates
If you generated new certificates, follow this procedure:
1. Copy the CD image to your computer.
2. Copy the agentTrust.jks file from the Agent Manager directory AgentManager/certs to the CommonAgent\certs directory of the manager CD image. This overwrites the existing agentTrust.jks file. You can write a new CD image with the new file or keep this image on your computer and point the Suite Installer to the directory when requested.

Important: Before installing IBM TotalStorage Productivity Center for Data, define the group within your environment that will have administrator rights within Data Manager. This group must exist on the same machine where you are installing the Server component. During the installation, you are prompted to enter the name of this group.
3.12.4 Creating the DB2 database
Before you install the component, create the IBM TotalStorage Productivity Center for Data database:
1. From the Start menu, select Start → Programs → IBM DB2 → General Administration Tools → Control Center. This launches the DB2 Control Center.
2. Create a database that is used for IBM TotalStorage Productivity Center for Data as shown in Figure 3-8. Select All Databases, right-click, and select Create Databases → Standard.

Figure 3-8 DB2 database creation
  • 102. 3. In the window that opens (Figure 3-9), complete the required database name information. We used the database name of TPCDATA. Click Finish to complete the database creation. Figure 3-9 DB2 database information for creation 82 IBM TotalStorage Productivity Center V2.3: Getting Started
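As an alternative to the Control Center GUI, the same database can be created from the DB2 command line processor. A minimal sketch, assuming the database name TPCDATA used in this example:

```
REM Create the Productivity Center for Data repository database from a
REM DB2 command window (equivalent to the Control Center steps above)
db2 create database TPCDATA

REM Verify that the database was cataloged
db2 list database directory
```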
4

Chapter 4. Installing the IBM TotalStorage Productivity Center suite

Installation of the TotalStorage Productivity Center suite of products is done using two installation wizards. The first, the Prerequisite Software Installer, installs all the products needed before you can install the TotalStorage Productivity Center suite. The second, the Suite Installer, installs the individual components or the entire suite of products. This chapter documents the use of the Prerequisite Software Installer and the Suite Installer. It also includes hints and tips based on our experience.

© Copyright IBM Corp. 2005. All rights reserved. 83
4.1 Installing the IBM TotalStorage Productivity Center
IBM TotalStorage Productivity Center provides a Prerequisite Software Installer and a Suite Installer that guide you through the installation process. You can also use the Suite Installer to install stand-alone components.

The Prerequisite Software Installer installs the following products in this order:
1. DB2, which is required by all the managers
2. WebSphere Application Server, which is required by all the managers except for TotalStorage Productivity Center for Data
3. Tivoli Agent Manager, which is required by Fabric Manager and Data Manager

The Suite Installer installs the following products or components in this order:
1. IBM Director, which is required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication
2. Productivity Center for Disk and Replication Base, which is required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication
3. TotalStorage Productivity Center for Disk
4. TotalStorage Productivity Center for Replication
5. TotalStorage Productivity Center for Fabric - Manager
6. TotalStorage Productivity Center for Data - Manager

In addition to the manager installations, the Suite Installer guides you through the installation of other IBM TotalStorage Productivity Center components. You can select more than one installation option at a time. This redbook separates the types of installations into several sections to help explain them.
The additional types of installation tasks are:
– IBM TotalStorage Productivity Center Agent installations
– IBM TotalStorage Productivity Center GUI/Client installations
– Language Pack installations
– IBM TotalStorage Productivity Center product uninstallations
4.1.1 Considerations You may want to use IBM TotalStorage Productivity Center for Disk to manage the IBM TotalStorage Enterprise Storage Server (ESS), DS8000, DS6000, Storage Area Network (SAN) Volume Controller (SVC), IBM TotalStorage Fibre Array Storage Technology (FAStT), or DS4000 storage subsystems. In this case, you must install the prerequisite input/output (I/O) Subsystem Licensed Internal Code (SLIC) and Common Information Model (CIM) Agent for the devices. See Chapter 6, “Configuring IBM TotalStorage Productivity Center for Disk” on page 247, for more information. If you are installing the CIM Agent for the ESS, DS8000, or DS6000, you must install it on a separate machine. TotalStorage Productivity Center 2.3 does not support Linux on zSeries or on S/390®, nor does IBM TotalStorage Productivity Center support Windows domains. 84 IBM TotalStorage Productivity Center V2.3: Getting Started
• 105. 4.2 Prerequisite Software Installation This section guides you step by step through the install process of the prerequisite software components. 4.2.1 Best practices Before you begin installing the prerequisite software components, we recommend that you complete the following tasks:
1. Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center components, including the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM TotalStorage Productivity Center for Data, and IBM TotalStorage Productivity Center for Fabric. For details, refer to “Granting privileges” on page 66.
2. Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the procedure in 3.9, “Uninstalling Internet Information Services” on page 73.
3. Install and configure Simple Network Management Protocol (SNMP) as described in 3.10, “Installing SNMP” on page 73.
4. Stop and disable the Windows Management Instrumentation (see 3.7 on page 70) and World Wide Web Publishing (see 3.8, “World Wide Web Publishing” on page 73) services.
5. Create a database for the Agent Manager installation. To create the database, see 3.12.4, “Creating the DB2 database” on page 81. The default database name for Agent Manager is IBMCDB.
4.2.2 Installing prerequisite software Follow these steps to install the prerequisite software components: 1. Insert the IBM TotalStorage Productivity Center Prerequisite Software Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center CD-ROM drive. Double-click setup.exe. Note: It may take a few moments for the installer program to initialize. Be patient. Eventually, you see the language selection panel (Figure 4-1). 2.
The installer language window (Figure 4-1) opens. From the list, select a language. This is the language that is used to install this product. Click OK. Figure 4-1 Prerequisite Software Installer language Chapter 4. Installing the IBM TotalStorage Productivity Center suite 85
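The service changes called for in the best practices of 4.2.1 — stopping and disabling the Windows Management Instrumentation and World Wide Web Publishing services — can also be made from an elevated command prompt instead of the Services console. A minimal sketch using the standard Windows service-control tool; the short service names winmgmt and w3svc are the usual names on Windows 2000/2003, but verify them on your system:

```shell
rem Stop and disable the Windows Management Instrumentation service
sc stop winmgmt
sc config winmgmt start= disabled

rem Stop and disable the World Wide Web Publishing service
sc stop w3svc
sc config w3svc start= disabled
```

The Suite Installer later requires Windows Management Instrumentation to be running again; `sc config winmgmt start= auto` followed by `sc start winmgmt` reverses the change.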
  • 106. 3. The Prerequisite Software Installer wizard welcome pane in Figure 4-2 opens. Click Next. The Software License Agreement panel is then displayed. Read the terms of the license agreement. If you agree with the terms of the license agreement select the I accept the terms in the license agreement radio button and click Next to continue. Figure 4-2 Prerequisite Software Installer wizard 4. The prerequisite operating system check panel in Figure 4-3 on page 87 opens. When it completes successfully click Next. 86 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 107. Figure 4-3 Prerequisite Operating System check 5. The Tivoli Common Directory location panel (Figure 4-4) opens and prompts for a location for the log files. Accept the default location or enter a different location. Click Next to continue. Figure 4-4 Tivoli Common Directory location Chapter 4. Installing the IBM TotalStorage Productivity Center suite 87
  • 108. 6. The product selection panel (Figure 4-5) opens. To install the entire TotalStorage Productivity Center suite, check the boxes next to DB2, WebSphere, and Agent Manager. Figure 4-5 Product selection 7. The DB2 Universal Database panel (Figure 4-6) opens. Select Enterprise Server Edition and click Next to continue. Figure 4-6 DB2 Universal Database 88 IBM TotalStorage Productivity Center V2.3: Getting Started
• 109. Note: After clicking Next (Figure 4-6), if you see the panel in Figure 4-7, you must first stop and disable the Windows Management Instrumentation service before continuing with the installation. See 3.7 on page 70 for detailed instructions. Figure 4-7 Windows Management Instrumentation service warning Chapter 4. Installing the IBM TotalStorage Productivity Center suite 89
  • 110. 8. The DB2 user name and password panel (Figure 4-8) opens. If the DB2 user name exists on the system, the correct password must be entered or the DB2 installation will fail. If the DB2 user name does not exist it will be created by the DB2 install. In our installation we accepted the default user name and entered a unique password. Click Next to continue. Figure 4-8 DB2 user configuration 90 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 111. 9. The Target Directory Confirmation panel (Figure 4-9) opens. Accept the default target directories for DB2 installation or enter a different location. Click Next. Figure 4-9 Target Directory Confirmation 10.The select the languages panel (Figure 4-10) opens. This installs the languages selected for DB2. Select your desired language(s). Click Next. Figure 4-10 Language selection Chapter 4. Installing the IBM TotalStorage Productivity Center suite 91
  • 112. 11.The Preview Prerequisite Software Information panel (Figure 4-11) opens. Review the information and click Next. Figure 4-11 Preview Prerequisite Software Information 12.The WebSphere Application Server system prerequisites check panel (Figure 4-12) opens. When the check completes successfully click Next. Figure 4-12 WebSphere Application Server system prerequisites check 92 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 113. 13.The installation options panel (Figure 4-13) opens. Select the type of installation you wish to perform. The rest of this section guides you through Unattended Installation. Unattended Installation guides you through copying all installation images to a central location called the installation image depot. Once the copies are completed, the component installations proceed with no further intervention needed. Attended Installation prompts you to enter the location of each install image as needed. Click Next to continue. Figure 4-13 Installation options Chapter 4. Installing the IBM TotalStorage Productivity Center suite 93
  • 114. 14.The install image depot location panel opens (see Figure 4-14). Enter the location where all installation images are to be copied. Click Next. Figure 4-14 Install image depot location 15.You are first prompted for the location of the DB2 installation image (see Figure 4-15). Browse to the installation image and select the path to the installation files or insert the install CD and click Copy. Figure 4-15 DB2 installation source 94 IBM TotalStorage Productivity Center V2.3: Getting Started
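If you prefer to stage the installation images before running the wizard, the contents of each product CD can also be copied into the depot directory manually. A sketch; the drive letter and depot path below are examples only:

```shell
rem Copy the DB2 install image from CD into the install image depot
rem (D: and C:\depot\DB2 are example locations)
xcopy D:\*.* "C:\depot\DB2\" /s /e /i
```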
  • 115. 16.After the DB2 installation image is copied to the install image depot, you are prompted for the location of the WebSphere installation image (see Figure 4-16). Browse to the installation image and select the path to the installation files or insert the install CD and click Copy. Figure 4-16 WebSphere installation source 17.After the WebSphere installation image is copied, you are prompted for the location of the WebSphere Cumulative fix 3 installation image (see Figure 4-17). Browse to the installation image and select the path to the installation files or insert the install CD and click Copy. Figure 4-17 WebSphere fix 3 installation source Chapter 4. Installing the IBM TotalStorage Productivity Center suite 95
  • 116. 18.When an install image has been successfully copied to the Install Image Depot, a green check mark appears to the right of the prerequisite. After all the prerequisite software images are successfully copied to the install image depot (Figure 4-18), click Next. Figure 4-18 Installation images copied successfully 96 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 117. 19.The installation of DB2, WebSphere, and the WebSphere Fix Pack begins. When a prerequisite is successfully installed, a green check mark appears to its left. If the installation of a prerequisite fails, a red X appears to the left. If a prerequisite installation fails, exit the installer, check the logs to determine and correct the problem, and restart the installer. When the installation completes successfully (see Figure 4-19), click Next. Figure 4-19 DB2 and WebSphere installation complete Chapter 4. Installing the IBM TotalStorage Productivity Center suite 97
  • 118. 20.The Agent Manager Registry Information panel opens. Select the type of database, specify the database name, and choose a local or remote database. The default DB2 database name is IBMCDB. For a local database connection, the DB2 database will be created if it does not exist. We recommend that you take the default database name for a local database. Click Next to continue (see Figure 4-20). Attention: For a remote database connection, the database specified below must exist. Refer to 3.12.4, “Creating the DB2 database” on page 81 for information on how to create a database in DB2. Figure 4-20 Agent Manager Registry Information 98 IBM TotalStorage Productivity Center V2.3: Getting Started
• 119. 21.The Database Connection Information panel in Figure 4-21 opens. Specify the location of the database software directory (for DB2, the default install location is C:\Program Files\IBM\SQLLIB) and the database user name and password. You must specify the database host name and port if you are using a remote database. Click Next to continue. Figure 4-21 Agent Manager database connection Information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 99
• 120. Note: For a remote database connection, the database specified in Figure 4-20 on page 98 must exist. If the database does not exist, you will see the error message shown in Figure 4-22. Refer to 3.12.4, “Creating the DB2 database” on page 81 for information on how to create a database in DB2. Figure 4-22 DB2 database error 100 IBM TotalStorage Productivity Center V2.3: Getting Started
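The database the note refers to can be created (locally) or cataloged (remotely) from a DB2 command window. A sketch, assuming the default database name IBMCDB; the node name, host name, port, and user ID are examples only:

```shell
rem Create the Agent Manager registry database locally
db2 CREATE DATABASE IBMCDB

rem For a remote database, catalog the server node and the database on
rem the Agent Manager machine (AMNODE, dbserver.example.com, port 50000,
rem and db2admin are example values)
db2 CATALOG TCPIP NODE AMNODE REMOTE dbserver.example.com SERVER 50000
db2 CATALOG DATABASE IBMCDB AT NODE AMNODE

rem Verify the connection before rerunning the installer panel
db2 CONNECT TO IBMCDB USER db2admin
db2 CONNECT RESET
```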
  • 121. 22.A panel opens prompting for a location to install Tivoli Agent Manager (see Figure 4-23). Accept the default location or enter a different location. Click Next to continue. Figure 4-23 Tivoli Agent Manager installation directory Chapter 4. Installing the IBM TotalStorage Productivity Center suite 101
• 122. 23.The WebSphere Application Server Information panel (Figure 4-24) opens. This panel lets you specify the host name or IP address, and the cell and node names on which to install the Agent Manager. If you specify a host name, use the fully qualified host name. For example, specify HELIUM.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all Agent Manager services. We recommend that you use the fully qualified host name, not the IP address, of the Agent Manager server. Typically, the cell and node names are both the same as the host name of the computer. If WebSphere was installed before you started the Agent Manager installation wizard, you can look up the cell and node name values in the %WAS_INSTALL_ROOT%\bin\setupCmdLine.bat file. You can also specify the ports used by the Agent Manager. We recommend that you accept the defaults:
– Registration Port: The default is 9511 for the server-side Secure Sockets Layer (SSL).
– Secure Port: The default is 9512 for client authentication, two-way SSL.
– Public Port: The default is 9513.
If you are using WebSphere network deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation. After filling in the required information in the WebSphere Application Server Information panel, click Next. Figure 4-24 WebSphere Application Server Information 102 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 123. Note: If an IP address is entered in the WebSphere Application Server Information panel shown in Figure 4-24, the next panel (see Figure 4-25) explains why a host name is recommended. Click Back to use a host name or click Next to use the IP address. Figure 4-25 Agent Manager IP address warning Chapter 4. Installing the IBM TotalStorage Productivity Center suite 103
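When choosing between a host name and an IP address for the WebSphere Application Server Information panel, it helps to confirm beforehand that the fully qualified host name resolves and that the default Agent Manager ports (9511–9513) are free. A quick check from a command prompt; the host name is an example:

```shell
rem Confirm the fully qualified host name resolves to a static address
nslookup helium.almaden.ibm.com

rem Check that the default Agent Manager ports 9511-9513 are not in use
rem (findstr with a quoted space-separated list matches any of them)
netstat -an | findstr "9511 9512 9513"
```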
  • 124. 24.The Security Certificates panel (Figure 4-26) opens. Specify whether to create new certificates or to use the demonstration certificates. In a typical production environment, you would create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next. Figure 4-26 Security Certificates 104 IBM TotalStorage Productivity Center V2.3: Getting Started
• 125. 25.The Security Certificate Settings panel (see Figure 4-27) opens. Specify the certificate authority name, security domain, and agent registration password. The agent registration password is used to register the agents. You must provide this password when you install the agents. This password is also used for the Agent Manager key store and trust store files. Record this password; it is used again later in the installation process. The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the Agent Manager. It is the name of the security domain defined by the Agent Manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment. If you have multiple Agent Managers installed, this value must be different on each Agent Manager. The default agent registration password is changeMe, and it is case sensitive. Click Next to continue. Figure 4-27 Security Certificate Settings Chapter 4. Installing the IBM TotalStorage Productivity Center suite 105
  • 126. 26.The User input summary panel for Agent Manager (see Figure 4-28) opens. Review the information and click Next. Figure 4-28 User input summary 106 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 127. 27.The summary information for Agent Manager panel (see Figure 4-29) opens. Click Next. Figure 4-29 Agent Manager installation summary Chapter 4. Installing the IBM TotalStorage Productivity Center suite 107
• 128. 28.You will see a panel indicating the status of the Agent Manager install process. The IBMCDB database is created and tables are added to it. When the Agent Manager installation completes, the Summary of Installation and Configuration Results panel (see Figure 4-30) opens. Click Next to continue. Figure 4-30 Summary of Installation and Configuration Results 108 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 129. 29.The next panel (Figure 4-31) informs you when the Agent Manager service started successfully. Click Finish. Figure 4-31 Agent Manager service started Chapter 4. Installing the IBM TotalStorage Productivity Center suite 109
• 130. 30.The next panel (Figure 4-32) indicates the installation of prerequisite software is complete. Click Finish to exit the prerequisite installer. Figure 4-32 Prerequisite software installation complete 4.3 Suite installation This section guides you through the step-by-step process to install the TotalStorage Productivity Center components you select. The Suite Installer launches the installation wizard for each manager you chose to install. 4.3.1 Best practices Before you begin installing the suite of products, complete the following tasks:
1. If you are running the Fabric Manager installation under Windows 2000, the Fabric Manager installation requires the user ID to have the following user rights (see “Granting privileges” under 3.5.1, “User IDs” on page 65):
– Act as part of the operating system
– Log on as a service
2. Enable Windows Management Instrumentation (see 3.7 on page 70).
3. Install SNMP (see 3.10, “Installing SNMP” on page 73).
4. Create the database for the TotalStorage Productivity Center for Data installation (see 3.12.4, “Creating the DB2 database” on page 81).
4.3.2 Installing the TotalStorage Productivity Center suite Follow these steps for successful installation: 110 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 131. 1. Insert the IBM TotalStorage Productivity Center Suite Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center CD-ROM drive. Double-click setup.exe. Note: It may take a few moments for the installer program to initialize. Be patient. Eventually, you see the language selection panel (Figure 4-33). 2. The Installer language window (see Figure 4-33) opens. From the list, select a language. This is the language used to install this product. Click OK. Figure 4-33 Installer Wizard 3. You see the Welcome to the InstallShield Wizard for The IBM TotalStorage Productivity Center panel (see Figure 4-34). Click Next. Figure 4-34 Welcome to IBM TotalStorage Productivity Center panel 4. The Software License Agreement panel (Figure 4-35 on page 112) opens. Read the terms of the license agreement. If you agree with the terms of the license agreement, select the I accept the terms of the license agreement radio button. Then click Next. If you do not accept the terms of the license agreement, the installation program ends without installing IBM TotalStorage Productivity Center components. Chapter 4. Installing the IBM TotalStorage Productivity Center suite 111
  • 132. Figure 4-35 License agreement 5. The next panel enables you to select the type of installation (Figure 4-36). Select Manager installations of Data, Disk, Fabric, and Replication and then click Next. Figure 4-36 IBM TotalStorage Productivity Center options panel 112 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 133. 6. In the next panel (see Figure 4-37), select the components that you want to install. Click Next to continue. Figure 4-37 IBM TotalStorage Productivity Center components Chapter 4. Installing the IBM TotalStorage Productivity Center suite 113
  • 134. 7. The suite installer installs the IBM Director first (see Figure 4-38). Click Next. Figure 4-38 IBM Director prerequisite install 8. The IBM Director installation is now ready to begin (see Figure 4-39). Click Next. Figure 4-39 Begin IBM Director installation 114 IBM TotalStorage Productivity Center V2.3: Getting Started
• 135. 9. The package location for IBM Director panel (see Figure 4-40) opens. Enter the appropriate information and click Next. Note: Make sure the Windows Management Instrumentation service is disabled (see 3.7 on page 70 for detailed instructions). If it is enabled, a window prompting you to disable the service appears after you click Next. Figure 4-40 IBM Director package location 10.The next panel (see Figure 4-41) provides information about the IBM Director post-installation reboot option. When prompted, choose the option to reboot later. Click Next. Figure 4-41 IBM Director information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 115
  • 136. 11.The IBM Director Server - InstallShield Wizard panel (Figure 4-42) opens. It indicates that the IBM Director installation wizard will launch. Click Next. Figure 4-42 IBM Director InstallShield Wizard 12.The License Agreement window opens (Figure 4-43). Read the license agreement. Click I accept the terms in the license agreement radio button and then click Next. Figure 4-43 IBM Director license agreement 116 IBM TotalStorage Productivity Center V2.3: Getting Started
• 137. 13.The next window (Figure 4-44) displays the Enhance IBM Director with the new Server Plus Pack advertisement window. Click Next. Figure 4-44 IBM Director new Server Plus Pack window 14.The Feature and installation directory window (Figure 4-45) opens. Accept the default settings and click Next. Figure 4-45 IBM Director feature and installation directory window Chapter 4. Installing the IBM TotalStorage Productivity Center suite 117
  • 138. 15.The IBM Director service account information window (see Figure 4-46) opens. a. Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, then type the local host name (the recommended setup). b. Type a user name and password for IBM Director. The IBM Director will run under this user name and you will log on to the IBM Director console using this user name. In our installation we used the user ID we created to install the TotalStorage Productivity Center. This user must be part of the Administrator group. c. Click Next to continue. Figure 4-46 Account information 16.The Encryption settings window (Figure 4-47) opens. Accept the default settings in the Encryption settings window. Click Next. Figure 4-47 Encryption settings 118 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 139. 17.In the Software Distribution settings window (Figure 4-48), accept the default values and click Next. Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director. Figure 4-48 Installation target directory 18.The Ready to Install the Program window (Figure 4-49) opens. Click Install. Figure 4-49 Installation ready Chapter 4. Installing the IBM TotalStorage Productivity Center suite 119
  • 140. 19.The Installing IBM Director server window (Figure 4-50) reports the status of the installation. Figure 4-50 Installation progress 20.The Network driver configuration window (Figure 4-51) opens. Accept the default settings and click OK. Figure 4-51 Network driver configuration The secondary window closes and the installation wizard performs additional actions which are tracked in the status window. 120 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 141. 21.The Select the database to be configured window (Figure 4-52) opens. Select IBM DB2 Universal Database and click Next. Figure 4-52 Database selection 22.The IBM Director DB2 Universal Database configuration window (Figure 4-53) opens. It may be behind the status window. You must click this window to bring it to the foreground. a. In the Database name field, type a new database name for the IBM Director database table or type an existing database name. b. In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation. c. Click Next to continue. Figure 4-53 Database selection configuration Chapter 4. Installing the IBM TotalStorage Productivity Center suite 121
  • 142. 23.In the IBM Director DB2 Universal Database configuration secondary window (Figure 4-54), accept the default DB2 node name LOCAL - DB2. Click OK. Figure 4-54 Database node name selection 24.The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close. 25.When the InstallShield Wizard Completed window (Figure 4-55) opens, click Finish. Figure 4-55 Completed installation Important: Do not reboot the machine at the end of the IBM Director installation. The Suite Installer reboots the machine. 122 IBM TotalStorage Productivity Center V2.3: Getting Started
• 143. 26.When you see the IBM Director Server Installer Information window (Figure 4-56), click No. Figure 4-56 IBM Director reboot option Important: Are you installing IBM TotalStorage Productivity Center for Data? If so, have you created the database for IBM TotalStorage Productivity Center for Data, or are you using an existing database? If you are installing the Disk Manager, you must have created the administrative superuser ID and group and set the privileges. 27.The Install Status panel (see Figure 4-57) opens after a successful installation. Click Next. Figure 4-57 IBM Director Install Status successful Chapter 4. Installing the IBM TotalStorage Productivity Center suite 123
  • 144. 28.In the machine reboot window (see Figure 4-58), click Next to reboot the machine. Important: If the server does not reboot at this point, cancel the installer and reboot the server. Figure 4-58 Install wizard completion 124 IBM TotalStorage Productivity Center V2.3: Getting Started
• 145. 4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base There are three separate installations to perform:
– Install the IBM TotalStorage Productivity Center for Disk and Replication Base code.
– Install the IBM TotalStorage Productivity Center for Disk.
– Install the IBM TotalStorage Productivity Center for Replication.
IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where the IBM TotalStorage Productivity Center for Disk and Replication Base will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following required system privileges, called user rights, to successfully complete the installation, as described in 3.5.1, “User IDs” on page 65:
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process level token
– Debug programs
After rebooting the machine, the installer initializes to continue the suite install. A window opens prompting you to select the installation language to be used for this wizard (Figure 4-59). Select the language and click OK. Figure 4-59 Selecting the language for the IBM TotalStorage Productivity Center installation wizard Chapter 4. Installing the IBM TotalStorage Productivity Center suite 125
  • 146. 1. The next panel enables you to select the type of installation (Figure 4-60). Select Manager installations of Data, Disk, Fabric, and Replication and click Next. Figure 4-60 IBM TotalStorage Productivity Center options panel 2. The next window (Figure 4-61) opens allowing you to select which components to install. Select the components you wish to install (all components in this case) and click Next. Figure 4-61 TotalStorage Productivity Center components 126 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 147. 3. The installer checks that all prerequisite software is installed on your system (see Figure 4-62). Click Next. Figure 4-62 Prerequisite software check 4. Figure 4-63 shows the Installer window about to begin installation of Productivity Center for Disk and Replication Base. The window also displays the products that are yet to be installed. Click Next to begin the installation. Figure 4-63 IBM TotalStorage Productivity Center installation information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 127
  • 148. 5. The Package Location for Disk and Replication Manager window (Figure 4-64) opens. Enter the appropriate information and click Next. Figure 4-64 Package location for Productivity Center Disk and Replication 6. The Information for Disk and Replication Base Manager panel (see Figure 4-65) opens. Click Next. Figure 4-65 Installer information 128 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 149. 7. The Welcome panel (see Figure 4-66) opens. It indicates that the Disk and Replication Base Manager installation wizard will be launched. Click Next. Figure 4-66 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 129
  • 150. 8. In the Destination Directory panel (Figure 4-67), you confirm the target directories. Enter the directory path or accept the default directory and click Next. Figure 4-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory 9. In the IBM WebSphere Instance Selection panel (see Figure 4-68), click Next. Figure 4-68 WebSphere Application Server information 130 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 151. 10.If the installation user ID privileges were not set, you see an information panel stating that you need to set the privileges (see Figure 4-69). Click Yes. Figure 4-69 Verifying the effective privileges 11.The required user privileges are set and an informational window opens (see Figure 4-70). Click OK. Figure 4-70 Message indicating the enablement of the required privileges 12.At this point, the installation terminates. You must close the installer. Log off of Windows, log back on again, and then restart the installer. Chapter 4. Installing the IBM TotalStorage Productivity Center suite 131
  • 152. 13.In the Installation Type panel (Figure 4-71), select Typical and click Next. Figure 4-71 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation 14.If the IBM Director Support Program and IBM Director Server service is still running, the Servers Check panel (see Figure 4-72) opens and prompts you to stop the services. Click Next to stop the services. Figure 4-72 Server checks 132 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 153. 15.In the User Name Input 1 of 2 panel (Figure 4-73), enter the name and password for the IBM TotalStorage Productivity Center for Disk and Replication Base super user ID. This user name must be defined to the operating system. In our environment we used tpccimom as our super user. After entering the required information click Next to continue. Figure 4-73 IBM TotalStorage Productivity Center for Disk and Replication Base superuser information 16.If the specified super user ID is not defined to the operating system a window asking if you would like to create it appears (see Figure 4-74). Click Yes to continue. Figure 4-74 Create new local user account Chapter 4. Installing the IBM TotalStorage Productivity Center suite 133
  • 154. 17.In the User Name Input 2 of 2 panel (Figure 4-75), enter the user name and password for the IBM DB2 Universal Database Server. This is the user ID that was specified when DB2 was installed (see Figure 4-8 on page 90). Click Next to continue. Figure 4-75 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information 134 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 155. 18.The SSL Configuration panel (Figure 4-76) opens. If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, then you must enter the fully qualified name of the two server key files that were generated previously or that must be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation. The information that you enter will be used later. a. Choose either of the following options: • Generate a self-signed certificate: Select this option if you want the installer to automatically generate these certificate files. We generate the certificates in our installation. • Defer the generation of the certificate as a manual post-installation task: Select this option if you want to manually generate these certificate files after the installation, using WebSphere Application Server ikeyman utility. b. Enter the Key file and Trust file passwords. The passwords must be a minimum of six characters in length and cannot contain spaces. You should record the passwords in the worksheets provided in Appendix A, “Worksheets” on page 991. c. Click Next. Figure 4-76 Key and Trust file options Chapter 4. Installing the IBM TotalStorage Productivity Center suite 135
  • 156. The Generate Self-Signed Certificate window opens (see Figure 4-77). Complete all the required fields and click Next to continue. Figure 4-77 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information 136 IBM TotalStorage Productivity Center V2.3: Getting Started
• 157. 19.Next you see the Create Local Database window (Figure 4-78). Accept the default database name of DMCOSERV, or enter a different database name. Click Next to continue. Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications. Figure 4-78 IBM TotalStorage Productivity Center for Disk and Replication Base database name Chapter 4. Installing the IBM TotalStorage Productivity Center suite 137
  • 158. 20.The Preview window (Figure 4-79) displays a summary of all of the choices that you made during the customizing phase of the installation. Click Install to complete the installation. Figure 4-79 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information 138 IBM TotalStorage Productivity Center V2.3: Getting Started
• 159. 21.The DB2 database is created, the keys are generated, and the Productivity Center for Disk and Replication Base is installed. The Finish window opens. You can view the log file for any possible error messages. The log file is located in <install directory>\logs\dmlog.txt. The dmlog.txt file contains a trace of the installation actions. Click Finish to complete the installation. Figure 4-80 Productivity Center for Disk and Replication Base Installer - Finish Notepad opens and displays the post-installation tasks information. Read the information and complete any required tasks. 22.The Install Status window (Figure 4-81) opens after the successful Productivity Center for Disk and Replication Base installation. Click Next. Figure 4-81 Install Status for Productivity Center for Disk and Replication Base successful Chapter 4. Installing the IBM TotalStorage Productivity Center suite 139
  • 160. 4.3.4 IBM TotalStorage Productivity Center for Disk The next product to install is the Productivity Center for Disk as indicated in Figure 4-82. Click Next to begin the installation. Figure 4-82 IBM TotalStorage Productivity Center installer information 1. A window (Figure 4-83) opens that prompts you for the package location for CD-ROM labeled IBM TotalStorage Productivity Center for Disk. Enter the appropriate information and click Next. Figure 4-83 Productivity Center for Disk installation package location 140 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 161. 2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Disk installer wizard will be launched (see Figure 4-84). Click Next. Figure 4-84 IBM TotalStorage Productivity Center for Disk installer 3. The Productivity Center for Disk Installer - Welcome panel (see Figure 4-85) opens. Click Next. Figure 4-85 IBM TotalStorage Productivity Center for Disk Installer Welcome Chapter 4. Installing the IBM TotalStorage Productivity Center suite 141
  • 162. 4. The Destination Directory panel (Figure 4-86) opens. Enter the directory path or accept the default directory and click Next. Figure 4-86 Productivity Center for Disk Installer - Destination Directory 5. The Installation Type panel (Figure 4-87) opens. Select Typical and click Next. Figure 4-87 Productivity Center for Disk - Installation Type 142 IBM TotalStorage Productivity Center V2.3: Getting Started
• 163. 6. The Create Local Database panel (Figure 4-88) opens. Accept the default database name of PMDATA or enter a new database name. Then click Next. Figure 4-88 IBM TotalStorage Productivity Center for Disk - Create Local Database Chapter 4. Installing the IBM TotalStorage Productivity Center suite 143
  • 164. 7. Review the information on the IBM TotalStorage Productivity Center for Disk – Preview panel (Figure 4-89) and click Install. Figure 4-89 IBM TotalStorage Productivity Center for Disk Installer - Preview 8. The installer creates the required database (see Figure 4-90) and installs the product. You see a progress bar for the Productivity Center for Disk installation status. Figure 4-90 Productivity Center for Disk DB2 database creation 144 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 165. 9. When the installation is complete, you see the Finish panel (Figure 4-91). Review the post installation tasks. Click Finish. Figure 4-91 Productivity Center for Disk Installer - Finish 10.The Install Status window (Figure 4-92) opens after the successful Productivity Center for Disk installation. Click Next. Figure 4-92 Install Status for Productivity Center for Disk successful Chapter 4. Installing the IBM TotalStorage Productivity Center suite 145
  • 166. 4.3.5 IBM TotalStorage Productivity Center for Replication A panel opens that indicates that the installation for IBM TotalStorage Productivity Center for Replication is about to begin (see Figure 4-93). Click Next to begin the installation. Figure 4-93 IBM TotalStorage Productivity Center installation overview 1. The Package Location for Replication Manager panel (Figure 4-94) opens. Enter the appropriate information and click Next. Figure 4-94 Productivity Center for Replication install package location 146 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 167. 2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Replication installer wizard will be launched (see Figure 4-95). Click Next. Figure 4-95 Productivity Center for Replication installer 3. The Welcome window (Figure 4-96) opens. It suggests documentation that you can review prior to the installation. Click Next to continue or click Cancel to exit the installation. Figure 4-96 IBM TotalStorage Productivity Center for Replication Installer – Welcome Chapter 4. Installing the IBM TotalStorage Productivity Center suite 147
  • 168. 4. The Destination Directory panel (Figure 4-97) opens. Enter the directory path or accept the default directory. Click Next to continue. Figure 4-97 IBM TotalStorage Productivity Center for Replication Installer – Destination Directory 148 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 169. 5. The next panel (see Figure 4-98) asks you to select the installation type. Select the Typical radio button and click Next. Figure 4-98 Productivity Center for Replication Installer – Installation Type Chapter 4. Installing the IBM TotalStorage Productivity Center suite 149
  • 170. 6. In the Create Local Database for ‘Hardware’ Subcomponent window (see Figure 4-99), in the Database name field, enter a value for the new Hardware subcomponent database or accept the default. We recommend that you accept the default. Click Next. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents. Figure 4-99 IBM TotalStorage Productivity Center for Replication: Hardware subcomponent 150 IBM TotalStorage Productivity Center V2.3: Getting Started
• 171. 7. In the Create Local Database for ‘ElementCatalog’ Subcomponent window (see Figure 4-100), in the Database name field, enter a name for the new Element Catalog subcomponent database or accept the default. Click Next. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents. Figure 4-100 IBM TotalStorage Productivity Center for Replication: Element Catalog subcomponent Chapter 4. Installing the IBM TotalStorage Productivity Center suite 151
• 172. 8. In the Create Local Database for ‘ReplicationManager’ Subcomponent window (see Figure 4-101), in the Database name field, enter a name for the new Replication Manager subcomponent database or accept the default. Click Next. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents. Figure 4-101 TotalStorage Productivity Center for Replication: Replication Manager subcomponent 152 IBM TotalStorage Productivity Center V2.3: Getting Started
• 173. 9. In the Create Local Database window for the SVC ‘Hardware’ subcomponent (see Figure 4-102), in the Database name field, enter a name for the new SVC hardware subcomponent database or accept the default. Click Next. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents. Figure 4-102 IBM TotalStorage Productivity Center for Replication: SVC Hardware subcomponent Chapter 4. Installing the IBM TotalStorage Productivity Center suite 153
  • 174. 10.The Setting Tuning Cycle Parameter window (Figure 4-103) opens. Accept the default value of tuning every 24 hours or change the value. You can change this value later in the ElementCatalog.properties file. Click Next. Figure 4-103 IBM TotalStorage Productivity Center for Replication: Database tuning cycle 154 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 175. 11.Review the information in the TotalStorage Productivity Center for Replication Installer – Preview panel (Figure 4-104). Click Install. Figure 4-104 IBM TotalStorage Productivity Center for Replication Installer – Preview Chapter 4. Installing the IBM TotalStorage Productivity Center suite 155
  • 176. 12.You see the Productivity Center for Replication Installer - Finish panel (see Figure 4-105) upon successful installation. Read the post installation tasks. Click Finish to complete the installation. Figure 4-105 Productivity Center for Replication Installer – Finish 13.The Install Status window (Figure 4-106) opens after the successful Productivity Center for Disk and Replication Base installation. Click Next. Figure 4-106 Install Status for Productivity Center for Replication successful 156 IBM TotalStorage Productivity Center V2.3: Getting Started
• 177. 4.3.6 IBM TotalStorage Productivity Center for Fabric Prior to installing IBM TotalStorage Productivity Center for Fabric, you must complete several prerequisite tasks. These tasks are described in detail in 3.11, “IBM TotalStorage Productivity Center for Fabric” on page 75. Specifically, complete the tasks in the following sections: 3.10, “Installing SNMP” on page 73 3.11.1, “The computer name” on page 75 3.11.2 on page 75 3.11.3, “Windows Terminal Services” on page 75 “User IDs and password considerations” on page 76 3.11.4, “Tivoli NetView” on page 76 3.11.5, “Personal firewall” on page 77 “Security considerations” on page 77 Installing the manager After successful installation of the Productivity Center for Replication, the Suite Installer begins the installation of Productivity Center for Fabric (see Figure 4-107). Click Next. Figure 4-107 IBM TotalStorage Productivity Center installation information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 157
  • 178. 1. The panel that opens prompts you to specify the location of the install package for Productivity Center for Fabric Manager (see Figure 4-108). Enter the appropriate path and click Next. Important: If you used the demonstration certificates, point to the CD-ROM drive. If you generated new certificates, point to the manager CD image with the new agentTrust.jks file. Figure 4-108 Productivity Center for Fabric install package location 158 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 179. 2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Fabric installer wizard will be launched (see Figure 4-109). Click Next. Figure 4-109 Productivity Center for Fabric installer 3. A window opens in which you select the language to use for the wizard (see Figure 4-110). Select the required language and click OK. Figure 4-110 IBM TotalStorage Productivity Center for Fabric installer: Selecting the language Chapter 4. Installing the IBM TotalStorage Productivity Center suite 159
  • 180. 4. A panel opens asking you to select the type of installation you wish to perform (Figure 4-111). In this case, we install the IBM TotalStorage Productivity Center for Fabric code. You can also use the Suite Installer to perform remote deployment of the Fabric Agent. You can perform this operation only if you installed the common agent previously on machines. For example, you may have installed the Data Agent on the machines and want to add the Fabric Agent to the same machines. You must have the Fabric Manager installed before you can deploy the Fabric Agent. You cannot select both Fabric Manager Installation and Remote Fabric Agent Deployment at the same time. You can only select one option. Click Next. Figure 4-111 Fabric Manager installation 160 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 181. 5. The Welcome panel (Figure 4-112) opens. Click Next. Figure 4-112 IBM TotalStorage Productivity Center for Fabric: Welcome information 6. The next window that opens prompts you to confirm the target directory (see Figure 4-113). Enter the directory path or accept the default directory. Click Next. Figure 4-113 IBM TotalStorage Productivity Center for Fabric installation directory Chapter 4. Installing the IBM TotalStorage Productivity Center suite 161
• 182. 7. In the next panel (see Figure 4-114), you specify the port number. IBM TotalStorage Productivity Center for Fabric uses a range of 25 port numbers. You enter only the first port number, which is considered the primary port number; the primary port number and the next 24 numbers are reserved for use by IBM TotalStorage Productivity Center for Fabric. For example, if you specify port number 9550, IBM TotalStorage Productivity Center for Fabric uses port numbers 9550 through 9574. Ensure that the port numbers you choose are not used by other applications. To determine which port numbers are in use on a particular computer, type either of the following commands from a command prompt: netstat -a or netstat -an. The second form, netstat -an, displays addresses and ports numerically, which makes it easier to check for a specific port number. The port numbers in use on the system are listed in the Local Address column of the output. This field has the format host:port. Enter the primary port number and click Next. Figure 4-114 IBM TotalStorage Productivity Center for Fabric port number 162 IBM TotalStorage Productivity Center V2.3: Getting Started
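The 25-port reservation described above is straightforward to compute before you pick a primary port. The sketch below (the helper name is our own, for illustration only) lists the range the installer would claim for a given primary port, so you can compare it against the netstat output:

```python
def reserved_port_range(primary_port: int) -> list[int]:
    """Return the block of 25 ports that IBM TotalStorage Productivity
    Center for Fabric reserves: the primary port plus the next 24."""
    return list(range(primary_port, primary_port + 25))

ports = reserved_port_range(9550)
print(ports[0], ports[-1], len(ports))  # 9550 9574 25
```

Any port in that printed range that also appears in the Local Address column of the netstat output indicates a conflict with another application.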
• 183. 8. As shown in Figure 4-115, select the database repository, either DB2 or Cloudscape. If you select DB2, you must have previously installed DB2 on the server. DB2 is the recommended installation option. Click Next. Figure 4-115 IBM TotalStorage Productivity Center for Fabric database selection type 9. In the next panel (see Figure 4-116), select the WebSphere Application Server to use in the installation. WebSphere Application Server was installed as part of the prerequisite software, so we chose the Non Embedded (Full) WebSphere Application Server option. If the Fabric Manager is to be installed stand-alone on a server, choose the Embedded WebSphere Application Server - Express option. Click Next. Figure 4-116 Productivity Center for Fabric WebSphere Application Server type selection Chapter 4. Installing the IBM TotalStorage Productivity Center suite 163
• 184. 10.The Single/Multiple User ID/Password Choice panel (see Figure 4-117) opens if you selected DB2 as your database. This panel allows you to use the DB2 administrative user ID and password for DB2, WebSphere, Host Authentication, and NetView. If you select all the boxes, you are prompted only for the DB2 user ID and password, which is then used for all of these components. In our installation we selected only DB2 and NetView; a different user ID and password is used for WebSphere and Host Authentication. Note: If you selected IBM Cloudscape as your database, this panel is not displayed. Click Next. Figure 4-117 IBM TotalStorage Productivity Center for Fabric user and password options 164 IBM TotalStorage Productivity Center V2.3: Getting Started
• 185. 11.The DB2 Administrator user ID and password panel (Figure 4-118) opens if you selected DB2 as your database. Enter the DB2 administrative user ID and password. In this example we used the user ID and password specified during the DB2 installation (see Figure 4-8 on page 90). Click Next. The installer verifies that the user ID you entered exists. Figure 4-118 IBM TotalStorage Productivity Center for Fabric database user information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 165
  • 186. 12.In the next window (see Figure 4-119) that opens, type the name of the new database in the Type database name: field or accept the default. In our install we accepted the default database name. Click Next. Note: The database name must be unique. You cannot share the IBM TotalStorage Productivity Center for Fabric database with any other applications. Figure 4-119 IBM TotalStorage Productivity Center for Fabric database name 13.Since we did not check the box for WebSphere in Figure 4-117 on page 164, the panel in Figure 4-120 on page 167 opens prompting for a WebSphere user ID and password. We used the tpcadmin user ID, which is what we used for the IBM Director service account (refer to Figure 4-46 on page 118). Enter the required information and click Next. 166 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 187. Figure 4-120 WebSphere Application Server user ID and password 14.Since we also did not check the box for Host Authentication (Figure 4-117 on page 164), the following panel (Figure 4-121) opens. Enter the password for Host Authentication. This password is used by the Fabric agents. Click Next. Figure 4-121 Host Authentication password 15.In the window (Figure 4-122 on page 168) that opens, enter the parameters for the Tivoli NetView drive name. Click Next. Chapter 4. Installing the IBM TotalStorage Productivity Center suite 167
• 188. Figure 4-122 IBM TotalStorage Productivity Center for Fabric database drive information 16.The Agent Manager Information panel (Figure 4-123 on page 169) opens. You must complete the following fields: – Agent manager name or IP address: This is the host name or IP address of your Agent Manager. – Agent manager registration port: This is the port number of your Agent Manager. The default value is 9511. – Agent Manager public port: This is a public port. The default value is 9513. – Agent registration password (twice): This is the password used to register the common agent with the Agent Manager, as shown in Figure 4-27 on page 105. If the password was not set and the default was accepted, the password is changeMe. This password is case sensitive. The agent registration password resides in the AgentManager.properties file where the Agent Manager is installed. It is located in the following directory: %WSAS_INSTALL_ROOT%\InstalledApps\<cell>\AgentManager.ear\AgentManager.war\WEB-INF\classes\resource – Resource manager registration user ID: This is the user ID used to register the resource manager with the Agent Manager. The default is manager. The Resource Manager registration user ID and password reside in the Authorization.xml file where the Agent Manager is installed. It is located in the following directory: <Agent_Manager_install_dir>\config – Resource manager registration password (twice): This is the password used to register the resource manager with the Agent Manager. The default is password. Fill in the information and click Next. 168 IBM TotalStorage Productivity Center V2.3: Getting Started
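AgentManager.properties is an ordinary Java-style properties file (key=value lines). If you need to confirm the registration password it contains, a minimal parser such as the one below is enough; this is an illustrative sketch, and the sample key name is an assumption, not the file's actual key.

```python
def load_properties(text: str) -> dict[str, str]:
    """Parse simple Java-style properties text: key=value lines,
    skipping blank lines and lines starting with '#' or '!'."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "#!":
            continue
        key, sep, value = line.partition("=")
        if sep:  # keep only lines that actually contain a '='
            props[key.strip()] = value.strip()
    return props

# Hypothetical sample content; the real AgentManager.properties
# key names may differ.
sample = "# Agent Manager settings\nregistration.password=changeMe\n"
print(load_properties(sample)["registration.password"])  # changeMe
```

In practice you would read the file from the directory given above and look up the relevant key before retrying a failed agent registration.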
  • 189. Figure 4-123 IBM TotalStorage Productivity Center for Fabric Agent Manager information 17.The next panel (Figure 4-124) that opens provides information about the location and size of IBM TotalStorage Productivity Center for Fabric - Manager. Click Next. Figure 4-124 IBM TotalStorage Productivity Center for Fabric installation information 18.You see the Status panel. The installation can take about 15 to 20 minutes to complete. 19.When the installation has completed, you see a panel indicating that the wizard successfully installed the Fabric Manager (see Figure 4-125 on page 170). Click Next. Chapter 4. Installing the IBM TotalStorage Productivity Center suite 169
  • 190. Figure 4-125 IBM TotalStorage Productivity Center for Fabric installation status 20.In the next panel (see Figure 4-126), you are prompted to restart your computer. Select No, I will restart my computer later because you do not want to restart your computer now. Click Finish to complete the installation. Figure 4-126 IBM TotalStorage Productivity Center for Fabric restart options 21.The Install Status panel (see Figure 4-127 on page 171) opens. It indicates that the Productivity Center for Fabric installation was successful. Click Next. 170 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 191. Figure 4-127 IBM TotalStorage Productivity Center installation information 4.3.7 IBM TotalStorage Productivity Center for Data Prior to installing IBM TotalStorage Productivity Center for Data, you need to complete several prerequisite tasks. These tasks are described in detail in 3.12, “IBM TotalStorage Productivity Center for Data” on page 78. Specifically you must complete the tasks in the following sections: 3.12.1, “Server recommendations” on page 78 3.12.2, “Supported subsystems and databases” on page 78 3.12.3, “Security considerations” on page 79 3.12.4, “Creating the DB2 database” on page 81 The IBM TotalStorage Productivity Center for Data database needs to be created before you begin the installation. This section provides an overview of the steps you need to perform when installing IBM TotalStorage Productivity Center for Data. Important: Make sure that the Tivoli Agent Manager service is started before you begin the installation. You see the panel indicating that the installation of Productivity Center for Data - Manager is about to begin (see Figure 4-128 on page 172). Click Next to begin the installation. Chapter 4. Installing the IBM TotalStorage Productivity Center suite 171
  • 192. Figure 4-128 IBM TotalStorage Productivity Center for Data installation information 1. In the window that opens, you are prompted to enter the install package location for IBM TotalStorage Productivity Center for Data (see Figure 4-129). Enter the appropriate information and click Next. Figure 4-129 Productivity Center for Data install package location 2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Data installer wizard will be launched (see Figure 4-130 on page 173). Click Next. 172 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 193. Figure 4-130 Productivity Center for Data installer 3. In the next panel (see Figure 4-131), select Install Productivity Center for Data and click Next. Figure 4-131 IBM TotalStorage Productivity Center for Data install window Chapter 4. Installing the IBM TotalStorage Productivity Center suite 173
  • 194. 4. Read the License Agreement shown in Figure 4-132. Indicate your acceptance of the agreement by selecting the I have read and AGREE to abide by the license agreement above check box. Then click Next. Figure 4-132 IBM TotalStorage Productivity Center for Data license agreement 5. The next panel asks you to confirm that you read the license agreement (see Figure 4-133). Click Yes to indicate that you have read and accepted the license agreement. Figure 4-133 Confirmation the Productivity Center for Data license agreement has been read 174 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 195. 6. The next window shown in Figure 4-134 allows you to choose the type of installation that you are performing. Select The Productivity Center for Data Server and an Agent on this machine. This installs the server, agent, and user interface components on the machine where the installation program is running. You must install the server on at least one machine within your environment. Click Next. Figure 4-134 IBM TotalStorage Productivity Center for Data selection options Chapter 4. Installing the IBM TotalStorage Productivity Center suite 175
  • 196. 7. Review and enter the license key for the appropriate functions if required. See Figure 4-135. Click Next. Figure 4-135 IBM TotalStorage Productivity Center for Data license key information 176 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 197. 8. The installation program validates the license key and you are asked to select the relational database management system (RDBMS) that you want to host the Data Manager repository. See Figure 4-136. The repository is a set of relational database tables where Data Manager builds a database of statistics to keep track of your environment. For our installation, we select IBM DB2 UDB. Click Next. Figure 4-136 IBM TotalStorage Productivity Center for Data database selection 9. The Create Service Account panel opens to create the TSRMsrv1 local account. Click Yes. Figure 4-137 Create Service Account Chapter 4. Installing the IBM TotalStorage Productivity Center suite 177
• 198. 10.In the next window (see Figure 4-138), complete these tasks: a. Select the database that was created as a prerequisite. Refer to 3.12.4, “Creating the DB2 database” on page 81. b. Fill in the required user ID and password. This is the DB2 user ID and password defined previously. c. Click Next. Figure 4-138 IBM TotalStorage Productivity Center for Data database selection option 11.The Repository Creation Parameters panel (see Figure 4-139 on page 179) for UDB opens. On this panel you can specify the database schema and tablespace name. If you are using DB2 as the repository, you can also choose how you will manage the database space: – System Managed (SMS): This option indicates that the space is managed by the operating system. In this case you specify the Container Directory, which is then managed by the system and can grow as large as the free space on the file system. Tip: If you do not have in-house database skills, the System Managed approach is recommended. – Database Managed (DMS): This option means that the space is managed by the database. In this case you need to specify the Container Directory, Container File, and Size fields. The Container File specifies a filename for the repository, and Size is the predefined space for that file. You can later change this by using the ALTER TABLESPACE command. We accepted the defaults. 178 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 199. Tip: We recommend that you use meaningful names for Container Directory and Container File at installation. This can help you in case you need to find the Container File. Enter the necessary information and click Next. Figure 4-139 IBM TotalStorage Productivity Center for Data repository information Chapter 4. Installing the IBM TotalStorage Productivity Center suite 179
• 200. 12.The Productivity Center for Data Parameters panel (Figure 4-140) opens. Review the parameters and click Next. Figure 4-140 IBM TotalStorage Productivity Center for Data installation parameters 180 IBM TotalStorage Productivity Center V2.3: Getting Started
• 201. 13.In the Agent Manager Parameters panel (Figure 4-141 on page 182), provide information about the Agent Manager installed in your environment. Table 4-1 describes the fields in the panel. Table 4-1 Agent Manager Parameters descriptions – Hostname: Enter the fully qualified network name or IP address of the Agent Manager server as seen by the agents. – Registration Port: Enter the port number of the Agent Manager. The default is 9511. – Public Port: Enter the public port for the Agent Manager. The default is 9513. – Resource Manager Username: Enter the user ID used to register the resource manager with the Agent Manager. The default is manager. – Resource Manager Password: Enter the password used to register the resource manager with the Agent Manager. – Agent Registration password: This is the password that was set during the Tivoli Agent Manager installation (see Figure 4-27 on page 105). The default password is changeMe. The password is stored in the AgentManager.properties file, in the %install dir%\AgentManager\image directory. Click Next. Note: If an error is displayed during this part of the installation, verify that the agentTrust.jks file was copied across and verify the Agent Registration password. Chapter 4. Installing the IBM TotalStorage Productivity Center suite 181
  • 202. Figure 4-141 IBM TotalStorage Productivity Center for Data Agent Manager install information 182 IBM TotalStorage Productivity Center V2.3: Getting Started
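The documented defaults for the Agent Manager fields can be kept in one place and overlaid with site-specific values before you fill in the panel. The structure below is an illustrative sketch only; the dictionary and function names are our own, not an installer API, and the hostname shown is a placeholder.

```python
# Defaults as documented for the Agent Manager Parameters panel.
AGENT_MANAGER_DEFAULTS = {
    "registration_port": 9511,            # Agent Manager registration port
    "public_port": 9513,                  # Agent Manager public port
    "resource_manager_user": "manager",   # resource manager registration ID
    "resource_manager_password": "password",
    "agent_registration_password": "changeMe",  # case sensitive
}

def merged_parameters(overrides: dict) -> dict:
    """Overlay site-specific values on the documented defaults."""
    params = dict(AGENT_MANAGER_DEFAULTS)
    params.update(overrides)
    return params

p = merged_parameters({"hostname": "agentmgr.example.com"})
print(p["registration_port"], p["agent_registration_password"])  # 9511 changeMe
```

Keeping the values together this way also makes it easy to cross-check the Data Manager panel against the values entered earlier for the Fabric Manager.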
• 203. 14.Use the NAS Discovery Parameters panel in Figure 4-142 to configure Data Manager for use with any network-attached storage (NAS) devices in your environment. If you do not have any NAS devices, you can leave the fields blank. Click Next. Figure 4-142 IBM TotalStorage Productivity Center for Data NAS options Chapter 4. Installing the IBM TotalStorage Productivity Center suite 183
• 204. 15.The Space Requirements panel for the Productivity Center for Data Server (Figure 4-143) opens. Enter the directory path or accept the default directory. If the current disk or device does not have enough space for the installation, then you can enter a different location for the installation in the Choose the installation directory field. Or you can click Browse to browse your system for an available and appropriate space. The default installation directory is C:\Program Files\IBM\TPC\Data. Click Next. Figure 4-143 IBM TotalStorage Productivity Center for Data installation destination options 16.Confirm the path for installing the Productivity Center for Data Server as shown in Figure 4-144. At this point, the installation process has gathered all of the information that is needed to perform the installation. Click OK. Figure 4-144 IBM TotalStorage Productivity Center for Data Server destination path confirmation 184 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 205. 17.Review and change the Productivity Center for Data Agent Parameters (see Figure 4-145) as required. We recommend that you accept the defaults. Click Next. Figure 4-145 IBM TotalStorage Productivity Center for Data agent parameters Chapter 4. Installing the IBM TotalStorage Productivity Center suite 185
  • 206. 18.The Windows Service Account panel shown in Figure 4-146 opens. Choose Create a local account for the agent to run under and click Next. Figure 4-146 Windows Service Account 186 IBM TotalStorage Productivity Center V2.3: Getting Started
• 207. 19.The Space Requirements panel (see Figure 4-147) opens for the Productivity Center for Data Agent. Enter the directory path or accept the default directory. If the current disk or device does not have enough space for the installation, then you can enter a different location for the installation in the Choose the Common Agent installation directory field. Or you can click Browse to browse your system for an available and appropriate space. The default installation directory is C:\Program Files\Tivoli\ep. Click Next. Figure 4-147 IBM TotalStorage Productivity Center for Data common agent installation information 20.When you see a message similar to the one in Figure 4-148, confirm the path where Productivity Center for Data Agent is to be installed. At this point, the installation process has gathered all of the information necessary to perform the installation. Click OK. Figure 4-148 IBM TotalStorage Productivity Center for Data Agent destination path confirmation Chapter 4. Installing the IBM TotalStorage Productivity Center suite 187
21. When you see a window similar to the example in Figure 4-149, review the choices that you have made. Then click Next.

Figure 4-149 IBM TotalStorage Productivity Center for Data preview options
22. A window opens that tracks the progress of the installation (see Figure 4-150).

Figure 4-150 IBM TotalStorage Productivity Center for Data installation information
23. When the installation is done, the progress window shows a message indicating that the installation completed successfully (see Figure 4-151). Review this panel and click Done.

Figure 4-151 IBM TotalStorage Productivity Center for Data success information

24. The Install Status panel opens showing the message The Productivity Center for Data installation was successful. Click Next to complete the installation.

Figure 4-152 Install Status for Productivity Center for Data successful
Chapter 5. CIMOM install and configuration

This chapter provides a step-by-step guide to configuring the Common Information Model Object Manager (CIMOM), LSI Provider, and Service Location Protocol (SLP) that are required to use the IBM TotalStorage Productivity Center.
5.1 Introduction

After you have completed the installation of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Fabric, or TotalStorage Productivity Center for Data, you need to install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents.

Note: For the remainder of this chapter, we refer to the TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Fabric, and TotalStorage Productivity Center for Data simply as the TotalStorage Productivity Center.

The TotalStorage Productivity Center uses SLP as the method for CIM clients to locate managed objects. The CIM clients may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device can be accessed and configured by management applications using industry-standard XML-over-HTTP transactions.

In this chapter we describe the steps for:
- Planning considerations for Service Location Protocol (SLP)
- SLP configuration recommendation
- General performance guidelines
- Planning considerations for CIMOM
- Installing and configuring the CIM agent for Enterprise Storage Server and DS6000/DS8000
- Verifying the connection to the ESS
- Verifying the connection to the DS6000/DS8000
- Setting up the Service Location Protocol Directory Agent (SLP DA)
- Installing and configuring the CIM agent for the DS4000 family
- Configuring the CIM agent for SAN Volume Controller

5.2 Planning considerations for Service Location Protocol

The Service Location Protocol (SLP) has three major components: the Service Agent (SA), the User Agent (UA), and the Directory Agent (DA). The SA and UA are required components; the DA is optional. Decide whether to use an SLP DA in your environment based on the considerations described below.
5.2.1 Considerations for using SLP DAs

Consider using a DA to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades.
When you deploy one or more DAs, UAs unicast their service requests to a DA, and SAs register with the DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity.

Consider using DAs in your enterprise if any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or time-outs during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.

The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.

5.2.2 SLP configuration recommendation

This section provides configuration recommendations for enabling TotalStorage Productivity Center to discover a larger set of storage devices. The recommendations cover some of the more common SLP configuration problems: router configuration and SLP directory agent configuration.

Router configuration

Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address and port: 239.255.255.253, port 427.
The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation.

Attention: Routers are sometimes configured to prevent passing of multicast packets between subnets. Routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast TTL (time-to-live) for packets they pass between subnets, which can make it necessary to set the multicast TTL higher to discover systems on the other subnets of the router. The multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. Multicast discovery does not discover Director V1.x systems or systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations).

Chapter 5. CIMOM install and configuration 193
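The TTL behavior described above can be illustrated with a short sketch. This is not product code: it simply shows, using Python's standard socket library, how a hypothetical SLP user agent would set the multicast TTL on the socket it uses to send service requests to 239.255.255.253, port 427. Raising the TTL lets the request cross more router hops, matching the discussion of multi-subnet discovery.

```python
import socket
import struct

# SLP multicast group and port (RFC 2608)
SLP_MCAST_ADDR = "239.255.255.253"
SLP_PORT = 427

def make_slp_multicast_socket(ttl=2):
    """Create a UDP socket configured for SLP multicast requests.

    The TTL controls how many router hops a multicast service
    request may cross; raise it to reach subnets behind routers
    that forward multicast (illustrative sketch only).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("B", ttl))
    return sock

# A UA that must reach devices up to four subnets away:
sock = make_slp_multicast_socket(ttl=4)
```

A real UA would then send an SLP service request datagram to (SLP_MCAST_ADDR, SLP_PORT); the sketch stops at socket configuration, which is the part the router discussion concerns.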
SLP directory agent configuration

Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each subnet. Each of these DAs can discover all services within its own subnet, but no services outside it. To allow TotalStorage Productivity Center to discover all of the devices, it must be statically configured with the address of each of these DAs. This setup is described in “Configuring TotalStorage Productivity Center for SLP discovery” on page 223.

5.3 General performance guidelines

Here are some general performance considerations for configuring the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication environment:
- Do not overpopulate the SLP discovery panel with SLP agent hosts. Remember that TotalStorage Productivity Center for Disk includes a built-in SLP User Agent (UA) that receives information about SLP Service Agents and Directory Agents (DAs) that reside in the same subnet as the TotalStorage Productivity Center for Disk installation. You should have no more than one DA per subnet.
- Misconfiguring the IBM Director discovery preferences may impact performance on auto discovery or on device presence checking. It may also result in application time-outs, as attempts are made to resolve and communicate with hosts that are not available.
- Consider it mandatory to run the ESS CLI and ESS CIM agent or DS CIM agent, and the LSI Provider software, on another host of comparable size to the main TotalStorage Productivity Center server.
Attempting to run a full TotalStorage Productivity Center implementation (Disk Manager, Data Manager, Fabric Manager, Replication Manager, DB2, IBM Director, and the WebSphere Application Server) on the same host as the CIM agent results in dramatically increased wait times for data retrieval. You may also experience resource contention and port conflicts.

5.4 Planning considerations for CIMOM

The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface. Figure 5-1 on page 195 shows an overview of the CIM agent.
Figure 5-1 CIM Agent overview

You may install the CIM agent code on the same server that hosts the device management interface, or you may install it on a separate server.

Attention: At this time only a few devices come with an integrated CIM agent; most devices need an external CIMOM so that CIM-enabled management applications (CIM clients) can communicate with the device. For ease of installation, IBM provides ICAT (short for Integrated Configuration Agent Technology), a bundle that mainly includes the CIMOM, the device provider, and an SLP SA.

5.4.1 CIMOM configuration recommendations

The following recommendations are based on our experience in the ITSO lab environment:
- The CIMOM agent code that you are planning to use must be supported by the installed version of TotalStorage Productivity Center. Refer to the following link for the latest updates: http://www-1.ibm.com/servers/storage/support/software/tpc/
- You must have a CIMOM-supported firmware level on the storage devices. If you have an incorrect version of firmware, you may not be able to discover and manage the storage devices.
- The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection. Hence it is recommended to have a dedicated server for the CIMOM agent, although you may configure the same CIMOM agent for multiple devices of the same type.
- You may also plan to locate this server within the same data center where the storage devices are located, in consideration of firewall port requirements. Typically, it is best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you may be able to limit the firewall openings to only the ports needed for TotalStorage Productivity Center communication with the CIMOM.
Co-locating CIM agent instances of differing types on the same server is not recommended because of resource contention. It is strongly recommended to have separate, dedicated servers for the CIMOM agents and TotalStorage Productivity Center, because of resource contention, TCP/IP port requirements, and system services coexistence.
5.5 Installing the CIM agent for ESS

Before starting Multiple Device Manager discovery, you must first configure the Common Information Model Object Manager (CIMOM) for ESS. The ESS CIM Agent package is made up of the following parts (see Figure 5-2).

Figure 5-2 ESS CIM Agent Package

This section provides an overview of the installation and configuration of the ESS CIM Agent on a Windows 2000 Advanced Server operating system.

5.5.1 ESS CLI install

The following installation and configuration tasks are listed in the order in which they should be performed. Before you install the DS CIM Agent, you must install the IBM TotalStorage Enterprise Storage System Command Line Interface (ESS CLI) if you plan to manage 2105-F20s or 2105-800s with this CIM agent. The DS CIM Agent installation program checks your system for the existence of the ESS CLI and displays the warning shown in Figure 5-16 on page 205 if no valid ESS CLI is found.

Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall the latest ESS CLI software. You must have a minimum ESS CLI level of 2.4.0.236.

Perform the following steps to install the ESS CLI for Windows:

1. Insert the CD for the ESS CLI in the CD-ROM drive, run the setup, and follow the instructions as shown in Figure 5-3 on page 197 through Figure 5-11 on page 201.

Note: The ESS CLI installation wizard detects whether you have an earlier level of the ESS CLI software installed on your system and uninstalls the earlier level. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI.
Figure 5-3 ESS CLI InstallShield Wizard I

2. Select I accept the terms of the license agreement and click Next.

Figure 5-4 ESS CLI License agreement

3. Click Next.
Figure 5-5 ESS CLI choose target system panel

4. Click Next.

Figure 5-6 ESS CLI Setup Status panel

5. Click Next.
Figure 5-7 ESS CLI selected options summary

Figure 5-8 ESS CLI Installation Progress

6. Click Next.
Figure 5-9 ESS CLI installation complete panel

7. Read the information and click Next.

Figure 5-10 ESS CLI Readme

8. Reboot your system before proceeding with the ESS CIM Agent installation. You must do this because the ESS CLI depends on environment variable settings that will not be in effect for the ESS CIM Agent, which runs as a service, until you reboot your system.
Figure 5-11 ESS CLI Reboot panel

9. Verify that the ESS CLI is installed:
- Click Start → Settings → Control Panel.
- Double-click the Add/Remove Programs icon.
- Verify that there is an IBM ESS CLI entry.

10. Verify that the ESS CLI is operational and can connect to the ESS. For example, from a command prompt window, issue the following command:

esscli -u userid -p password -s 9.1.11.111 list server

Where:
- 9.1.11.111 represents the IP address of the Enterprise Storage Server
- userid represents the Enterprise Storage Server Specialist user name
- password represents the Enterprise Storage Server Specialist password for the user name

Figure 5-12 shows the response from the esscli command.

Figure 5-12 ESS CLI verification
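When several ESS units must be checked, the same verification can be scripted. The sketch below is a hypothetical wrapper, not part of the product: it assumes esscli is on the PATH of the machine running it, and the helper names (build_esscli_check, verify_ess) are our own.

```python
import subprocess

def build_esscli_check(ip, userid, password):
    """Build the esscli invocation used in the text to verify
    connectivity to an ESS (same argument order as shown above)."""
    return ["esscli", "-u", userid, "-p", password, "-s", ip,
            "list", "server"]

def verify_ess(ip, userid, password):
    """Run the check; True when esscli exits with status 0.
    Hypothetical convenience wrapper, assumes esscli on PATH."""
    result = subprocess.run(build_esscli_check(ip, userid, password),
                            capture_output=True, text=True)
    return result.returncode == 0
```

Calling verify_ess for each registered ESS before proceeding with the CIM Agent install catches connectivity and credential problems early, which is exactly what step 10 is for.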
5.5.2 DS CIM Agent install

To install the DS CIM Agent on your Windows system, perform the following steps:

1. Log on to your system as the local administrator.

2. Insert the CIM Agent for DS CD into the CD-ROM drive. The Install Wizard launchpad should start automatically if you have autorun mode set on your system. You should see a launchpad window similar to Figure 5-13.

3. You may review the Readme file from the launchpad menu. Then click Installation Wizard. The Installation Wizard runs the setup.exe program and shows the Welcome panel in Figure 5-14 on page 203.

Note: The DS CIM Agent program should start within 15 - 30 seconds if you have autorun mode set on your system. If the installer window does not open, perform the following steps:
- Use a Command Prompt or Windows Explorer to change to the Windows directory on the CD.
- If you are using a Command Prompt window, run launchpad.bat.
- If you are using Windows Explorer, double-click the launchpad.bat file.

Note: If you are using CIMOM code from the IBM download Web site and not from the distribution CD, make sure you use a short Windows directory path name; executing launchpad.bat from a long path name may fail. An example of a short path name is C:\CIMOM\setup.exe.

Figure 5-13 DS CIM Agent launchpad
4. The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 5-14).

Figure 5-14 DS CIM Agent welcome window
5. The License Agreement window opens. Read the license agreement information. Select I accept the terms of the license agreement, then click Next to accept the license agreement (see Figure 5-15).

Figure 5-15 DS CIM Agent license agreement
The window shown in Figure 5-16 appears only if no valid ESS CLI is installed. If you do not plan to manage an ESS from this CIM agent, click Next.

Important: If you plan to manage an ESS from this CIM agent, click Cancel. Install the ESS CLI following the instructions in 5.5.1, “ESS CLI install” on page 196.

Figure 5-16 DS CIM Agent ESS CLI warning
6. The Destination Directory window opens. Accept the default directory and click Next (see Figure 5-17).

Figure 5-17 DS CIM Agent destination directory panel
7. The Updating CIMOM Port window opens (see Figure 5-18). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup we used the default port, 5989.

Note: If the default port is already in use by another service, modify the default port and click Next. Use the following command to check which ports are in use:

netstat -a

Figure 5-18 DS CIM Agent port window
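Instead of scanning the netstat -a output by eye, a single port can be probed programmatically. This is a generic sketch, not part of the installer: if a bind on the port succeeds, no other service currently holds it.

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on the given TCP port.

    A quick stand-in for reading netstat -a output by hand:
    binding succeeds only when no other service holds the port.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False
```

For example, port_is_free(5989) tells you whether the default CIMOM port can be accepted as-is during step 7.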
8. The Installation Confirmation window opens (see Figure 5-19). Click Install to confirm the installation location and file size.

Figure 5-19 DS CIM Agent installation confirmation
9. The Installation Progress window opens (see Figure 5-20) indicating how much of the installation has completed.

Figure 5-20 DS CIM Agent installation progress

10. When the Installation Progress window closes, the Finish window opens (see Figure 5-21 on page 210). Check the View post installation tasks check box if you want to view the post installation tasks readme when the wizard closes. We recommend that you review the post installation tasks. Click Finish to exit the installation wizard (Figure 5-21 on page 210).

Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the DS CIM Agent for Windows is installed.
Figure 5-21 DS CIM Agent install successful

11. If you checked the View post installation tasks box, the window shown in Figure 5-22 appears. Close the window when you have finished reviewing the post installation tasks.

Figure 5-22 DS CIM Agent post install readme

The launchpad window (Figure 5-13 on page 202) appears. Click Exit.
5.5.3 Post installation tasks

Continue with the following post installation tasks for the ESS CIM Agent.

Verify the installation of SLP

Verify that the Service Location Protocol is started:
- Select Start → Settings → Control Panel.
- Double-click the Administrative Tools icon.
- Double-click the Services icon.
- Find Service Location Protocol in the Services window list. For this component, the Status column should show Started, as in Figure 5-23.

Figure 5-23 Verify Service Location Protocol started

If SLP is not started, right-click the SLP entry and select Start from the pop-up menu. Wait for the Status column to change to Started.

Verify the installation of the DS CIM Agent

Verify that the CIMOM service is started:
- If you closed the Services window, select Start → Settings → Control Panel.
- Double-click the Administrative Tools icon.
- Double-click the Services icon.
- Find IBM CIM Object Manager - ESS in the Services window list. For this component, the Status column should show Started and the Startup Type column should show Automatic, as in Figure 5-24 on page 212.
Figure 5-24 DS CIM Object Manager started confirmation

If the IBM CIM Object Manager is not started, right-click IBM CIM Object Manager - ESS and select Start from the pop-up menu. Wait for the Status column to change to Started.

If you are able to perform all of the verification tasks successfully, the ESS CIM Agent has been successfully installed on your Windows system. Next, perform the configuration tasks.

5.6 Configuring the DS CIM Agent for Windows

This task configures the DS CIM Agent after it has been successfully installed. Configure the ESS CIM Agent with the information for each Enterprise Storage Server that the ESS CIM Agent is to access:
- Select Start → Programs → IBM TotalStorage CIM Agent for ESS → CIM agent for the IBM TotalStorage DS Open API → Enable DS Communications as shown in Figure 5-25.

Figure 5-25 Configuring the ESS CIM Agent

5.6.1 Registering DS devices

Type the following command for each DS server that is configured:

addessserver <ip> <user> <password>

Where:
- <ip> represents the IP address of the DS Storage Server
- <user> represents the DS Storage Server user name
- <password> represents the DS Storage Server password for the user name
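If you have several DS devices to register, the repeated addessserver invocations can be driven from a list. This is a hypothetical convenience wrapper, not part of the CIM agent: it assumes addessserver is executable from the shell on the CIMOM server, and the runner parameter is injectable purely so the sketch can be exercised without real devices.

```python
import subprocess

def register_ds_servers(servers, runner=subprocess.run):
    """Issue one addessserver <ip> <user> <password> per device.

    servers: iterable of (ip, user, password) tuples.
    runner:  callable used to execute each command; defaults to
             subprocess.run, injectable for dry runs.
    """
    commands = [["addessserver", ip, user, password]
                for ip, user, password in servers]
    for cmd in commands:
        runner(cmd, check=True)  # raise if a registration fails
    return commands
```

Remember that, as noted below, the CIMOM must be restarted after any additions or removals so that it picks up the updated device list.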
Repeat the previous step for each additional DS device that you want to configure.

Note: The CIMOM collects and caches the information from the defined DS servers at startup time, so the next start of the CIMOM might take longer.

Attention: If the user name and password entered are incorrect, or the DS CIM agent cannot connect to the DS device, an error occurs and the DS CIM Agent will not start and stop correctly. Use the following command to remove the entry that is causing the problem, and reboot the server:

rmessserver <ip>

Whenever you add or remove a DS from CIMOM registration, you must restart the CIMOM to pick up the updated DS device list.

5.6.2 Registering ESS devices

Type the addess <ip> <user> <password> command for each ESS. Where:
- <ip> represents the IP address of a cluster of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name
- <password> represents the Enterprise Storage Server Specialist password for the user name

The addess command example is shown in Figure 5-26 on page 214.

Important: The DS CIM agent relies on ESS CLI connectivity from the DS CIMOM server to the ESS devices. Make sure that the ESS devices you are registering are reachable and available at this point. It is recommended to verify this by launching the ESS Specialist browser from the ESS CIMOM server. Log on to both ESS clusters for each ESS and make sure you are authenticated with the correct ESS passwords and IP addresses. If the ESSs are on a different subnet than the ESS CIMOM server and behind a firewall, you must authenticate through the firewall before registering the ESS with the CIMOM. If you have a bi-directional firewall between the ESS devices and the CIMOM server, you must verify the connection using the rsTestConnection command of the ESS CLI code.
If the ESS CLI connection is not successful, you must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and also from the CIMOM server to the ESS. Once you are satisfied that you can authenticate and receive an ESS CLI heartbeat from all the ESSs successfully, you may proceed with entering the ESS IP addresses. If the CIMOM agent fails to authenticate with the ESSs, it will not start up properly and may be very slow, because it retries the authentication.
Figure 5-26 The addess command example

5.6.3 Register ESS server for Copy Services

Type the following command for each ESS server that is configured for Copy Services:

addessserver <ip> <user> <password>

Where:
- <ip> represents the IP address of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name
- <password> represents the Enterprise Storage Server Specialist password for the user name

Repeat the previous step for each additional ESS device that you want to configure. Close the setdevice interactive session by typing exit. Once you have defined all the ESS servers, you must stop and restart the CIMOM to make the CIMOM initialize the information for the ESS servers.

Note: The CIMOM collects and caches the information from the defined ESS servers at startup time, so the next start of the CIMOM might take longer.

Attention: If the user name and password entered are incorrect, or the ESS CIM agent cannot connect to the ESS, an error occurs and the ESS CIM Agent will not start and stop correctly. Use the following command to remove the ESS entry that is causing the problem, and reboot the server:

rmessserver <ip>

Whenever you add or remove an ESS from CIMOM registration, you must restart the CIMOM to pick up the updated ESS device list.
5.6.4 Restart the CIMOM

Perform these steps to use the Windows Start menu to stop and restart the CIMOM. This is required so that the CIMOM can register new devices or unregister deleted devices.

Stop the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Stop CIMOM service. A Command Prompt window opens to track the stoppage of the CIMOM (as shown in Figure 5-27). If the CIMOM has stopped successfully, the following message is displayed:

Figure 5-27 Stop ESS CIM Agent

Restart the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Start CIMOM service. A Command Prompt window opens to track the progress of the starting of the CIMOM. If the CIMOM has started successfully, the message shown in Figure 5-28 is displayed.

Figure 5-28 Restart ESS CIM Agent

Note: The restart of the CIMOM may take a while because it connects to the defined ESS servers and caches that information for future use.

5.6.5 CIMOM user authentication

Use the setuser interactive tool to configure the CIMOM for the users who will have the authority to use the CIMOM. The user is the TotalStorage Productivity Center for Disk and Replication superuser.

Important: A TotalStorage Productivity Center for Disk and Replication superuser ID and password must be created. This user ID is initially used by TotalStorage Productivity Center to connect to the CIM Agent. It is easiest if this superuser ID is used for all CIM Agents, but it can be set individually for each CIM Agent if necessary. This user ID should be eight characters or fewer.

Upon installation of the CIM Agent for ESS, the provided default user name is “superuser” with a default password of “passw0rd”. The first time you use the setuser tool, you must use this user name and password combination. Once you have defined other user names, you can start the setuser command by specifying other defined CIMOM user names.
Note: The users that you configure to have authority to use the CIMOM are uniquely defined to the CIMOM software and have no required relationship to operating system user names, ESS Specialist user names, or ESS Copy Services user names.

Here is the procedure:
- Open a Command Prompt window and change directory to the ESS CIM Agent directory, for example C:\Program Files\IBM\cimagent.
- Type the command setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session to identify users to the CIMOM.
- Type the command adduser cimuser cimpass in the setuser interactive session to define new users, where cimuser represents the new user name to access the ESS CIM Agent CIMOM and cimpass represents the password for the new user name.
- Close the setuser interactive session by typing exit.

For our ITSO lab setup we used TPCCIMOM as the superuser name and TPCCIMOM as the password.

5.7 Verifying the connection to the ESS

During this task the ESS CIM Agent software connectivity to the Enterprise Storage Server (ESS) is verified. The connection to the ESS is through the ESS CLI software. If the network connectivity fails, or if the user name and password that you set in the configuration task are incorrect, the ESS CIM Agent cannot connect successfully to the ESS. The installation, verification, and configuration of the ESS CIM Agent must be completed before you verify the connection to the ESS.

Verify that you have network connectivity to the ESS from the system where the ESS CIM Agent is installed. Issue a ping command to the ESS and check that you can see reply statistics from the ESS IP address.

Verify that SLP is active by selecting Start → Settings → Control Panel. Double-click the Administrative Tools icon, then double-click the Services icon. You should see a window similar to Figure 5-23 on page 211. Ensure that the Status is Started.
Verify that the CIMOM is active by selecting Start → Settings → Control Panel → Administrative Tools → Services. In the Services panel, select the IBM CIM Object Manager service and verify that the Status is shown as Started, as in Figure 5-29 on page 217.
Figure 5-29 Verify ESS CIMOM has started

Verify that the CIMOM has a dependency on SLP; this is configured automatically when you install the CIM agent software. Select Start → Settings → Control Panel, double-click the Administrative Tools icon, double-click the Services icon, and then select Service Location Protocol as shown in Figure 5-30.

Figure 5-30 SLP properties panel

Click Properties and select the Dependencies tab as shown in Figure 5-31 on page 218. Ensure that the IBM CIM Object Manager has a dependency on Service Location Protocol (this should be the case by default).
Figure 5-31 SLP dependency on CIMOM

Verify CIMOM registration with SLP by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Check CIMOM Registration. A window opens displaying the WBEM services, as shown in Figure 5-32. These services have either registered themselves with SLP or you explicitly registered them with SLP using slptool. If you changed the default ports for a CIMOM during installation, the port number should be listed correctly here. It may take some time for a CIM Agent to register with SLP.

Figure 5-32 Verify CIM Agent registration with SLP

Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the ESS CIMOM attempts to contact each ESS registered to it, so the startup may take some time, especially if it cannot connect and authenticate to any of the registered ESSs.

Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password that you configured to manage the CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center for Disk superuser name and password in order for TotalStorage Productivity Center for Disk to have the authority to manage the CIMOM. The verifyconfig command checks the registration for the ESS CIM Agent and checks that it can
connect to the ESSs. In the ITSO lab we had configured two ESSs (as shown in Figure 5-33 on page 219).

Figure 5-33 The verifyconfig command

5.7.1 Problem determination

You might run into some errors. If that is the case, you can check the cimom.log file. This file is located in the C:\Program Files\IBM\cimagent directory. Verify that you have the entries with your current install timestamp as shown in Figure 5-34. The entries of specific interest are:

CMMOM050OI Registered service service:wbem:https://x.x.x.x:5989 with SLP SA
CMMOM0409I Server waiting for connections

The first entry indicates that the CIMOM has successfully registered with SLP using the port number specified at ESS CIM agent install time, and the second entry indicates that it has started successfully and is waiting for connections.

Figure 5-34 CIMOM Log Success
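The log check above can be scripted. The following is a minimal sketch; the message IDs are the ones quoted above, and the log path assumes the default install directory, so verify both on your system:

```python
# Sketch: scan cimom.log for the two entries that indicate a healthy startup.
# Message IDs (CMMOM050OI, CMMOM0409I) are the ones quoted in the text; the
# path below assumes the default CIM agent install directory.
LOG_PATH = r"C:\Program Files\IBM\cimagent\cimom.log"

def cimom_started(log_lines):
    """True when the log shows both SLP registration and server startup."""
    registered = any("CMMOM050OI" in line for line in log_lines)
    listening = any("CMMOM0409I" in line for line in log_lines)
    return registered and listening

sample = [
    "CMMOM050OI Registered service service:wbem:https://9.1.38.48:5989 with SLP SA",
    "CMMOM0409I Server waiting for connections",
]
print(cimom_started(sample))  # True
# On the CIMOM server itself:
# with open(LOG_PATH) as f:
#     print(cimom_started(f))
```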
If you still have problems, refer to the DS Open Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the doc directory at the root of the CIM Agent CD.

5.7.2 Confirming the ESS CIMOM is available

Before you proceed, you need to be sure that the DS CIMOM is listening for incoming connections. To do this, run a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on the configured port (as indicated by a black screen with the cursor at the top left) tells you that the DS CIMOM is active. You selected this port during DS CIMOM code installation. If the telnet connection fails, you will see a panel like the one shown in Figure 5-35. In that case, investigate the problem until the telnet connection succeeds with a blank screen.

Figure 5-35 Example of telnet fail connection

Another method to verify that the DS CIMOM is up and running is to use the CIM Browser interface. For Windows machines, change the working directory to C:\Program Files\IBM\cimagent and run startcimbrowser. The WBEM browser in Figure 5-36 will appear. The default user name is superuser and the default password is passw0rd. If you have already changed it using the setuser command, the new userid and password must be provided. This should be set to the TotalStorage Productivity Center for Disk userid and password.

Figure 5-36 WBEM Browser
When the login is successful, you should see a panel like the one in Figure 5-37.

Figure 5-37 CIMOM Browser window

5.7.3 Setting up the Service Location Protocol Directory Agent

You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center for Disk can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center for Disk resides. Perform the following steps to set up the SLP DAs:

1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center for Disk to discover.
2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.)
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Perform the following steps to edit the file:
– For example, if you have the DS CIM agent installed in the default install directory path, go to the C:\Program Files\IBM\cimagent\slp directory.
– Look for the file named slp.conf.
– Make a backup copy of this file and name it slp.conf.bak.
– Open the slp.conf file and scroll down until you find (or search for) the line ;net.slp.isDA = true. Remove the semicolon (;) at the beginning of the line. Ensure that this property is set to true (= true) rather than false. Save the file.
– Copy this file (or replace it if the file already exists) to the main Windows subdirectory for Windows machines (for example, C:\winnt), or to the /etc directory for UNIX machines.
4. It is recommended that you reboot the SLP server at this stage. Alternatively, you can restart the SLP and CIMOM services. You can do this from your Windows desktop: Start Menu → Settings → Control Panel → Administrative Tools → Services. In the Services GUI, locate Service Location Protocol, right-click, and select Stop. A panel pops up requesting to stop the IBM CIM Object Manager service; click Yes. Start the SLP daemon again after it has stopped successfully. Alternatively, you can restart the CIMOM using the command line as described in “Restart the CIMOM” on page 215.

Creating the slp.reg file

Important: To avoid having to manually register the CIMOM outside the subnet every time the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration file is C:\winnt. slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received.

slp.reg file example

Example 5-1 is a sample slp.reg file.
Example 5-1 slp.reg file

#############################################################################
#
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
#
#############################################################################

#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

5.7.4 Configuring TotalStorage Productivity Center for SLP discovery

You can use this panel to enter a list of DA addresses.
TotalStorage Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center for Disk is installed. Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA.
You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and will register itself with the DA instead of the SA. However, the DA will automatically discover all other services registered with other SLP SAs in that subnet.

Attention: You will need to register the IP address of the server running the SLP DA daemon with the IBM Director to facilitate MDM SLP discovery. You can do this using the IBM Director console interface of TotalStorage Productivity Center for Disk. The procedure to register the IP address is described in 6.2, “SLP DA definition” on page 248.

5.7.5 Registering the DS CIM Agent to SLP

You need to manually register the DS CIM agent to the SLP DA only when the following conditions are both true: There is no DS CIM Agent in the TotalStorage Productivity Center for Disk server subnet, and the SLP DA used by Multiple Device Manager is also not running a DS CIM Agent.

Tip: If either of the preceding conditions is false, you do not need to perform the following steps.

To register the DS CIM Agent, issue the following commands on the SLP DA server:

cd "C:\Program Files\IBM\cimagent\slp"
slptool register service:wbem:https://ipaddress:port

Here, ipaddress is the ESS CIM Agent IP address. For our ITSO setup, the IP address of our ESS CIMOM server was 9.1.38.48 and we used the default port number 5989. Issue a verifyconfig command as shown in Figure 5-33 on page 219 to confirm that SLP is aware of the registration.

Attention: Whenever you update the SLP configuration as shown above, you may have to stop and start the slpd daemon. This enables SLP to register and listen on newly configured ports. Also, whenever you restart the SLP daemon, ensure that the IBM DS CIMOM agent has also restarted. Otherwise, issue the startcimom.bat command as shown in the previous steps. Another alternative is to reboot the CIMOM server. Note that the DS CIMOM startup takes a longer time.
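The slptool invocation described above follows a fixed pattern, so it can be generated rather than typed. The following sketch only builds the command; the service-URL format follows the examples in this chapter, and actually running it assumes slptool is on the PATH of the SLP DA server:

```python
import subprocess

# Sketch: build the slptool registration command for a CIMOM, in the
# "service:wbem:<scheme>://<ip>:<port>" form used throughout this chapter.
def slptool_register_cmd(ip, port=5989, secure=True):
    scheme = "https" if secure else "http"
    return ["slptool", "register", f"service:wbem:{scheme}://{ip}:{port}"]

cmd = slptool_register_cmd("9.1.38.48")
print(" ".join(cmd))  # slptool register service:wbem:https://9.1.38.48:5989
# On the SLP DA server itself:
# subprocess.run(cmd, check=True)
```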
5.7.6 Verifying and managing CIMOM availability

You may now verify that TotalStorage Productivity Center for Disk can authenticate and discover the CIMOM agent services which are registered to the SLP DA. See “Verifying and managing CIMOMs availability” on page 256.
5.8 Installing the CIM agent for the IBM DS4000 family

The latest code for the IBM DS4000 family is available at the IBM support Web site. You need to download the correct and supported level of CIMOM code for TotalStorage Productivity Center for Disk Version 2.3. You can navigate from the following IBM support Web site for TotalStorage Productivity Center for Disk to acquire the correct CIMOM code:

http://www-1.ibm.com/servers/storage/support/software/tpcdisk/

You may have to traverse multiple links to get to the download files. At the time of writing this book, we used the Web page shown in Figure 5-38.

Figure 5-38 IBM support matrix Web page
Scrolling down the same Web page, we found the link for the DS4000 CIMOM code shown in Figure 5-39. This link leads to the Engenio Provider site. The current supported code level is 1.0.59, as indicated on the Web page.

Figure 5-39 Web download link for DS Family CIMOM code

From the Web site, select the operating system used for the server on which the IBM DS family CIM Agent will be installed. You will download a setup.exe file. Save it to a directory on the server on which you will be installing the DS4000 CIM Agent (see Figure 5-40 on page 227).
Figure 5-40 DS CIMOM Install

Launch the setup.exe file to begin the DS4000 family CIM agent installation. The InstallShield Wizard for LSI SMI-S Provider window opens (see Figure 5-41). Click Next to continue.

Figure 5-41 LSI SMI-S Provider window
The LSI License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 5-42).

Figure 5-42 LSI License Agreement

The LSI System Info window opens. The minimum requirements are listed along with the install system disk free space and memory attributes as shown in Figure 5-43. If the install system fails the minimum requirements evaluation, a notification window will appear and the installation will fail. Click Next to continue.

Figure 5-43 System Info window
The Choose Destination Location window appears. Click Browse to choose another location or click Next to begin the installation of the FAStT CIM agent (see Figure 5-44).

Figure 5-44 Choose a destination

The InstallShield Wizard will now prepare and copy the files into the destination directory. See Figure 5-45.

Figure 5-45 Install Preparation window
The README appears after the files have been installed. Read through it to become familiar with the most current information (see Figure 5-46). Click Next when ready to continue.

Figure 5-46 README file

In the Enter IPs and/or Hostnames window, enter the IP addresses and hostnames of the FAStT devices that this FAStT CIM agent will manage as shown in Figure 5-47.

Figure 5-47 FAStT device list
Use the Add New Entry button to add the IP addresses or hostnames of the FAStT devices that this FAStT CIM agent will communicate with. Enter one IP address or hostname at a time until all the FAStT devices have been entered and click Next (see Figure 5-48).

Figure 5-48 Enter hostname or IP address

Do not enter the IP address of a FAStT device in multiple FAStT CIM Agents within the same subnet. This may cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the FAStT devices. If the list of hostnames or IP addresses has been previously written to a file, use the Add File Contents button, which will open the Windows Explorer. Locate and select the file and then click Open to import the file contents. When all the FAStT device hostnames and IP addresses have been entered, click Next to start the SMI-S Provider Service (see Figure 5-49).

Figure 5-49 Provider Service starting
When the Service has started, the installation of the FAStT CIM agent is complete (see Figure 5-50).

Figure 5-50 Installation complete

Arrayhosts file

The installer creates the file %installroot%\SMI-SProvider\wbemservices\cimom\bin\arrayhosts.txt. The arrayhosts file is shown in Figure 5-51. In this file the IP addresses of installed DS4000 units can be reviewed, added, or edited.

Figure 5-51 Arrayhosts file

Verifying LSI Provider Service availability

You can verify from the Windows Services panel that the LSI Provider service has started, as shown in Figure 5-52 on page 233. If you change the contents of the arrayhosts file to add or delete DS4000 devices, you will need to restart the LSI Provider service using the Windows Services panel.
Figure 5-52 LSI Provider Service

Registering the DS4000 CIM agent

The DS4000 CIM Agent needs to be registered with an SLP DA if the FAStT CIM Agent is in a different subnet than that of the IBM TotalStorage Productivity Center for Disk and Replication Base environment. The registration is not currently provided automatically by the CIM Agent. You register the DS4000 CIM Agent with the SLP DA from a command prompt using the slptool command. An example of the slptool command is shown below. You must change the IP address to reflect the IP address of the workstation or server where you installed the DS4000 family CIM Agent. The IP address of our FAStT CIM Agent is 9.1.38.79 and the port is 5988. You need to execute this command on your SLP DA server. In our ITSO lab, we used the SLP DA on the ESS CIMOM server. Go to the directory C:\Program Files\IBM\cimagent\slp and run:

slptool register service:wbem:http://9.1.38.79:5988

Important: You cannot have the FAStT management password set if you are using IBM TotalStorage Productivity Center.

At this point you can run the following command on the SLP DA server to verify that the DS4000 family FAStT CIM agent is registered with the SLP DA:

slptool findsrvs wbem

The response from this command shows the available services, which you can verify.

5.8.1 Verifying and managing CIMOM availability

You may now verify that TotalStorage Productivity Center for Disk can authenticate and discover the CIMOM agent services which are registered with the SLP DA. You can proceed to your TotalStorage Productivity Center for Disk server. See “Verifying and managing CIMOMs availability” on page 256.
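The output of slptool findsrvs wbem can also be checked programmatically. This is a sketch that assumes the one-entry-per-line "service:wbem:&lt;url&gt;,&lt;lifetime&gt;" output format shown in the examples in this chapter:

```python
# Sketch: confirm a given CIMOM appears in "slptool findsrvs wbem" output.
# Assumes one "service:wbem:<url>,<lifetime>" entry per line, as in the
# registration examples in this chapter.
def is_registered(findsrvs_output, ip, port):
    needle = f"{ip}:{port}"
    return any(
        line.strip().startswith("service:wbem:") and needle in line
        for line in findsrvs_output.splitlines()
    )

out = ("service:wbem:http://9.1.38.79:5988,65535\n"
       "service:wbem:https://9.1.38.48:5989,65535")
print(is_registered(out, "9.1.38.79", 5988))  # True
print(is_registered(out, "9.1.38.79", 5989))  # False
```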
5.9 Configuring the CIMOM for SAN Volume Controller

The CIM Agent for SAN Volume Controller is part of the SAN Volume Controller Console and provides TotalStorage Productivity Center for Disk with access to SAN Volume Controller clusters. You must customize the CIM Agents in your enterprise to accept the TotalStorage Productivity Center for Disk user name and password. Figure 5-53 illustrates the communication between TotalStorage Productivity Center for Disk and the SAN Volume Controller environment.

Figure 5-53 TotalStorage Productivity Center for Disk and SVC communication

For additional details on how to configure the SAN Volume Controller Console, refer to the redbook IBM TotalStorage Introducing the SAN Volume Controller and SAN Integration Server, SG24-6423. To discover and manage the SAN Volume Controller, we need to ensure that our TotalStorage Productivity Center for Disk superuser name and password (the account we specify in the TotalStorage Productivity Center for Disk configuration panel, as shown in 5.9.1, “Adding the SVC TotalStorage Productivity Center for Disk user account” on page 235) match an account defined on the SAN Volume Controller console. In our case we implemented the username TPCSUID and password ITSOSJ. You may want to adopt a similar nomenclature and set up the username and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center for Disk.
5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account

As stated previously, you should implement a unique userid to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps:

1. Log in to the SAN Volume Controller console with a superuser account.
2. Click Users under My Work on the left side of the panel (see Figure 5-54).

Figure 5-54 SAN Volume Controller console
3. Select Add a user in the drop-down on the Users panel and click Go (see Figure 5-55).

Figure 5-55 SAN Volume Controller console Add a user
4. An introduction screen opens; click Next (see Figure 5-56).

Figure 5-56 SAN Volume Controller Add a user wizard
5. Enter the User Name and Password and click Next (see Figure 5-57).

Figure 5-57 SAN Volume Controller Console Define users panel

6. Select your candidate cluster and move it to the right under Administrator Clusters (see Figure 5-58). Click Next to continue.

Figure 5-58 SAN Volume Controller console Assign administrator roles
7. Click Next after you Assign service roles (see Figure 5-59).

Figure 5-59 SAN Volume Controller Console Assign user roles
8. Click Finish after you verify user roles (see Figure 5-60).

Figure 5-60 SAN Volume Controller Console Verify user roles

9. After you click Finish, the Viewing users panel opens (see Figure 5-61).

Figure 5-61 SAN Volume Controller Console Viewing Users
Confirming that the SAN Volume Controller CIMOM is available

Before you proceed, you need to be sure that the CIMOM on the SAN Volume Controller is listening for incoming connections. To do this, issue a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on port 5989 (as indicated by a black screen with the cursor at the top left) tells you that the SAN Volume Controller console CIMOM is active. If the telnet connection fails, you will see a panel like the one in Figure 5-62.

Figure 5-62 Example of telnet fail connection

5.9.2 Registering the SAN Volume Controller host in SLP

The next step in detecting a SAN Volume Controller is to manually register the SAN Volume Controller console to the SLP DA.

Tip: If your SAN Volume Controller console resides in the same subnet as the TotalStorage Productivity Center server, SLP registration is automatic, so you do not need to perform the following step.

To register the SAN Volume Controller Console, issue the following command on the SLP DA server:

slptool register service:wbem:https://ipaddress:5989

Here, ipaddress is the SAN Volume Controller console IP address. Run a verifyconfig command to confirm that SLP is aware of the SAN Volume Controller console registration.

5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary

TotalStorage Productivity Center discovers both IBM storage devices that comply with the Storage Management Initiative Specification (SMI-S) and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using the Service Location Protocol (SLP).
The TotalStorage Productivity Center server software performs SLP discovery on the network. The User Agent looks for all registered services with a service type of service:wbem. The TotalStorage Productivity Center performs the following discovery tasks:

– Locates individual storage devices
– Retrieves vital characteristics for those storage devices
– Populates the TotalStorage Productivity Center internal databases with the discovered information

The TotalStorage Productivity Center can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services have been discovered through SLP, the TotalStorage Productivity Center contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM. TotalStorage Productivity Center gathers the vital characteristics of each of these devices. For the TotalStorage Productivity Center to successfully communicate with the CIMOMs, the following conditions must be met:

– A common user name and password must be configured for all the CIM Agent instances that are associated with storage devices that are discoverable by TotalStorage Productivity Center (use adduser as described in 5.6.5, “CIMOM user authentication” on page 215). That same user name and password must also be configured for TotalStorage Productivity Center using the Configure MDM task in the TotalStorage Productivity Center interface. If a CIMOM is not configured with the matching user name and password, it will be impossible to determine which devices the CIMOM supports. As a result, no devices for that CIMOM will appear in the IBM Director Group Content pane.
– The CIMOM service must be accessible through the IP network.
– The TCP/IP network configuration on the host where TotalStorage Productivity Center is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by the TotalStorage Productivity Center.
It is important to verify that the CIMOM is up and running. To do that, use the following command from the TotalStorage Productivity Center server:

telnet CIMip port

Here, CIMip is the IP address where the CIM Agent runs, and port is the port value used for the communication (5989 for a secure connection, 5988 for an unsecure connection).

5.10.1 SLP registration and slptool

TotalStorage Productivity Center for Disk uses Service Location Protocol (SLP) discovery, which requires that all of the CIMOMs that TotalStorage Productivity Center for Disk discovers are registered using SLP. SLP can only discover CIMOMs that are registered in its IP subnet. For CIMOMs outside of the IP subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure that the CIM_InteropSchemaNamespace and Namespace attributes are specified. For example, type the following command:

slptool register service:wbem:https://myhost.com:port

Here, myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.
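The telnet check above can equally be scripted. A minimal sketch using a plain TCP connect; the default port assumes a secure CIMOM connection, so substitute your own host and port:

```python
import socket

# Sketch: scripted equivalent of the telnet check described above. Try a plain
# TCP connection to the CIMOM port and report whether anything is listening.
def cimom_listening(host, port=5989, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hosts/ports from your environment):
# cimom_listening("9.1.38.48", 5989)
```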
5.10.2 Persistency of SLP registration

Although it is acceptable to register services manually into SLP, it is possible for SLP users to statically register legacy services (applications that were not compiled to use the SLP library) using a configuration file that SLP reads at startup, called slp.reg. All of the registrations are maintained by slpd and will remain registered as long as slpd is alive. The Service Location Protocol (SLP) registration is lost if the server where SLP resides is rebooted or when the SLP service is stopped. A manual SLP registration is needed for all the CIMOMs outside the subnet where the SLP DA resides.

Important: To avoid having to manually register the CIMOM outside the subnet every time the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration file is C:\winnt for Windows machines, or the /etc directory for UNIX machines. slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received.

5.10.3 Configuring the slp.reg file

Example 5-2 shows a typical slp.reg file:

Example 5-2 An slp.reg file

#############################################################################
#
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
#
#############################################################################

#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
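An slp.reg stanza in this format can be generated rather than hand-edited. A sketch follows; the attribute names mirror Example 5-2 (per RFC 2614), and you would add CIM_InteropSchemaNamespace and Namespace lines where the device (for example, SANFS or FAStT) requires them:

```python
# Sketch: emit one slp.reg stanza in the format of Example 5-2 (RFC 2614).
# Only the service URL line is mandatory; description is optional here.
def slp_reg_entry(ip, port=5989, description=None, secure=True):
    scheme = "https" if secure else "http"
    lines = [f"service:wbem:{scheme}://{ip}:{port},en,65535"]
    if description:
        lines.append(f"description={description}")
    return "\n".join(lines) + "\n"

print(slp_reg_entry("9.43.226.237", description="SVC CIMOM Open Systems Lab"))
```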
Part 3 Configuring the IBM TotalStorage Productivity Center

In this part of the book we provide information about customizing the components of the IBM TotalStorage Productivity Center product suite, for the following components:

IBM TotalStorage Productivity Center for Disk
IBM TotalStorage Productivity Center for Replication
IBM TotalStorage Productivity Center for Fabric
IBM TotalStorage Productivity Center for Data

We also include a chapter on how to set up the individual (sub) agents on a managed host. © Copyright IBM Corp. 2005. All rights reserved. 245
Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk

This chapter provides information about the basic tasks that you need to complete after you install IBM TotalStorage Productivity Center for Disk:

Define SLP DA servers to IBM TotalStorage Productivity Center for Disk
Discover CIM Agents
Configure CIM Agents to IBM TotalStorage Productivity Center for Disk
Discover storage devices
Install the remote GUI
6.1 Productivity Center for Disk discovery summary

Productivity Center for Disk discovers both IBM storage devices that comply with the SMI-S and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using SLP. The Productivity Center for Disk server software performs SLP discovery in the network. The User Agent looks for all registered services with a service type of service:wbem. Productivity Center for Disk performs the following discovery tasks:

Locates individual CIM Agents
Locates individual storage devices
Retrieves vital characteristics for those storage devices
Populates the internal Productivity Center for Disk databases with the discovered information

Productivity Center for Disk can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services are discovered through SLP, Productivity Center for Disk contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM. Productivity Center for Disk gathers the vital characteristics of each of these devices. For Productivity Center for Disk to successfully communicate with the CIMOMs, you must meet the following conditions:

A common user name (superuser) and password must be set during installation of the IBM TotalStorage Productivity Center for Disk base. This user name and password can be changed using the Configure MDM task in the Productivity Center for Disk interface. If a CIMOM is not configured with the matching user name and password, then you must configure each CIMOM with the correct userid and password using the panel shown in Figure 6-16 on page 258. We recommend that a common user name and password be used for each CIMOM.

The CIMOM service must be accessible through the IP network.
The TCP/IP network configuration on the host where Productivity Center for Disk is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by Productivity Center for Disk. It is important to verify that the CIMOM is up and running. To do that, use the following command: telnet CIMip port Here, CIMip is the IP address where the CIM Agent runs, and port is the port value used for the communication (5989 for a secure connection; 5988 for an unsecure connection). 6.2 SLP DA definition Productivity Center for Disk can discover CIM Agents on the same subnet through SLP without any additional configuration. An SLP DA should be set up on each subnet as described in 5.7.3, “Setting up the Service Location Protocol Directory Agent” on page 221. The SLP DA can then be defined to Productivity Center for Disk using the panel located at Options → Discovery Preferences → MDM SLP Configuration, as shown in Figure 6-1 on page 249. Enter the IP address of the server with the SLP DA into the SLP directory agent host box and click Add. 248 IBM TotalStorage Productivity Center V2.3: Getting Started
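The telnet check described above can also be scripted. The following sketch is illustrative only (it is not part of the product); it simply attempts a TCP connection to the CIM ports, which is equivalent to the telnet test:

```python
import socket

def cimom_port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to the given CIMOM port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 5989 is the secure (https) CIM port; 5988 is the unsecure (http) port.
# For example: cimom_port_open("9.1.38.48", 5989)
```

A True result only confirms that the port is reachable; it does not verify the CIMOM credentials, which are tested later with the Test Connection button.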
• 269. We are assuming that you have followed the steps outlined in Chapter 5, “CIMOM install and configuration” on page 191. Complete the following tasks in order to discover devices defined to your Productivity Center common base host. Make sure that: All CIM agents are running and are registered with the SLP server. The SLP agent host is defined in the IBM Director options (Figure 6-1) if it resides in a different subnet from that of the TotalStorage Productivity Center server (Options → Discovery Preferences → MDM SLP Configuration tab). Note: If the Productivity Center common base host server resides in the same subnet as the CIMOM, then it is not a requirement that the SLP DA host IP address be specified in the Discovery Preferences panel as shown in Figure 6-2. Refer to Chapter 2, “Key concepts” on page 27 for details on SLP discovery. Here we provide a step-by-step procedure: 1. Discovery happens automatically based on preferences that are defined in the Options → Discovery Preferences → MDM SLP Configuration tab. The default values for Auto discovery interval and Presence check interval are set to 0 (see Figure 6-1). These values should be set to more suitable values, for example, 1 hour for Auto discovery interval and 15 minutes for Presence check interval. The values you specify have a performance impact on the CIMOMs and the Productivity Center common base server, so do not set these values too low. Figure 6-1 Setting discovery preferences Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 249
  • 270. Continue entering IP addresses for all SLP DA servers. Click OK when finished (see Figure 6-2). Figure 6-2 Discovery preference set 2. Turn off automatic inventory on discovery. Important: Because of the time and CIMOM resources needed to perform inventory on storage devices, it is undesirable and unnecessary to perform this operation each time Productivity Center common base performs a device discovery. 250 IBM TotalStorage Productivity Center V2.3: Getting Started
• 271. Turn off automatic inventory by selecting Options → Server Preferences as shown in Figure 6-3. Figure 6-3 Selecting Server Preferences Now uncheck the Collect On Discovery check box as shown in Figure 6-4; all other options can remain unchanged. Select OK when done. Figure 6-4 Server Preferences Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 251
  • 272. 3. You can click Discover all Systems in the top left corner of the IBM Director Console to initiate an immediate discovery task (see Figure 6-5). Figure 6-5 Discover All Systems icon 252 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 273. 4. You can also use the IBM Director Scheduler to create a scheduled job for new device discovery. – Either click the scheduler icon in the IBM Director tool bar or use the menu, Tasks → Scheduler (see Figure 6-6). Figure 6-6 Tasks Scheduler option for Discovery – In the Scheduler, click File → New Job (see Figure 6-7). Figure 6-7 Task Scheduler Discovery job Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 253
  • 274. – Establish parameters for the new job. Under the Date/Time tab, include date and time to perform the job, and whether the job is to be repeated (see Figure 6-8). Figure 6-8 Discover job parameters – From the Task tab (see Figure 6-9), select Discover MDM storage devices/SAN Elements, then click Select. Figure 6-9 Discover job selection task 254 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 275. – Click File → Save as, or use the Save as icon. – Provide a descriptive job name in the Save Job panel (see Figure 6-10) and click OK. Figure 6-10 Discover task job name Now run the discovery process by selecting Tasks → Discover Systems → All Systems and Devices (Figure 6-11). Figure 6-11 Perform discovery Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 255
• 276. Double-click the Manage CIMOM task to see the status of the discovery (Figure 6-12). Figure 6-12 Configure CIMOMs The CIMOMs appear in the list as they are discovered. 6.2.1 Verifying and managing CIMOM availability You may now verify that TotalStorage Productivity Center for Disk can authenticate to and discover the CIM Agent services that are registered with the SLP DA. Launch the IBM Director Console and select TotalStorage Productivity Center for Disk → Manage CIMOMs in the tasks panel as shown in Figure 6-13. The panel shows the status of the connection to the respective CIMOM servers. Our ITSO DS CIMOM server connection status is indicated in the first line, with IP address 9.1.38.48, port 5996, and a status of Success. Figure 6-13 Manage CIMOM panel 256 IBM TotalStorage Productivity Center V2.3: Getting Started
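Each entry in the Manage CIMOMs panel originates from an SLP service URL of type service:wbem (for example, service:wbem:https://9.1.38.48:5989, using our lab address). The following helper is an illustrative sketch only, not part of the product, showing how the protocol, host, and port that TotalStorage Productivity Center contacts can be extracted from such a URL:

```python
from urllib.parse import urlsplit

def parse_wbem_url(service_url):
    """Split a service:wbem service URL into (protocol, host, port)."""
    prefix = "service:wbem:"
    if not service_url.startswith(prefix):
        raise ValueError("not a service:wbem URL: " + service_url)
    parts = urlsplit(service_url[len(prefix):])
    return parts.scheme, parts.hostname, parts.port

print(parse_wbem_url("service:wbem:https://9.1.38.48:5989"))
# → ('https', '9.1.38.48', 5989)
```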
• 277. It should not be necessary to change any information if you followed the recommendation to use the same superuser ID and password for all CIMOMs. Select the CIMOM to be configured and click Properties to configure a CIMOM (Figure 6-14). Figure 6-14 Select a CIMOM to configure 1. In order to verify and re-confirm the connection, you may select the respective connection status and click Properties. Figure 6-16 on page 258 shows the properties panel. You may verify the information and update it if necessary. The namespace, user name, and password are picked up automatically, hence they do not normally need to be entered manually. This user name is used by TotalStorage Productivity Center for Disk to log on to the CIMOM. If you have difficulty getting a successful connection, then you may manually enter the namespace, user name, and password. Update the properties panel and test the connection to the CIMOM: a. Enter the Namespace value. It is /root/ibm for the ESS, DS6000, and DS8000. It is /interop for the DS4000. b. Select the protocol. It is typically https for the ESS, DS6000, and DS8000. It is http for the DS4000. c. Enter the user name and password. The default is the superuser password entered earlier. If you entered a different user name and password with the setuser command for the CIM agent, then enter that user name and password here. d. Click Test Connection to verify correct configuration. e. You should see the panel in Figure 6-15. Click Close on the panel. Figure 6-15 Successful test of the connection to a CIMOM Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 257
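The namespace and protocol choices in steps a and b above can be summarized per device type. The table below is a convenience sketch only; the namespace spellings follow common documentation for these CIM Agents and should be verified against your own CIM Agent configuration, and the port can differ per installation (our lab DS CIMOM used 5996):

```python
# Namespace and protocol per device type, as entered in the CIMOM
# properties panel. Verify against your CIM Agent documentation.
CIMOM_SETTINGS = {
    "ESS":    {"namespace": "/root/ibm", "protocol": "https"},
    "DS6000": {"namespace": "/root/ibm", "protocol": "https"},
    "DS8000": {"namespace": "/root/ibm", "protocol": "https"},
    "DS4000": {"namespace": "/interop", "protocol": "http"},
}

def settings_for(device_type):
    """Look up the namespace and protocol for a device type (case-insensitive)."""
    return CIMOM_SETTINGS[device_type.upper()]

print(settings_for("ds4000"))
# → {'namespace': '/interop', 'protocol': 'http'}
```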
• 278. f. Click OK on the panel shown in Figure 6-16 to save the properties. Figure 6-16 CIMOM Properties 2. After the connection to the CIMOM is successful, perform discovery again as shown before in Figure 6-11 on page 255. This discovers the storage devices connected to each CIMOM (Figure 6-17). Figure 6-17 DS4000 CIMOM Properties Panel 3. Click the Test Connection button to see a panel similar to Figure 6-15 on page 257, showing that the connection is successful. Tip: If you move or delete CIMOMs in your environment, the old CIMOM entries are not automatically updated, and entries with a Failure status are seen as in Figure 6-13 on page 256. These invalid entries can slow down discovery performance, because TotalStorage Productivity Center tries to contact them each time it performs a discovery. You cannot delete CIMOM entries directly from the Productivity Center common base interface. Delete them using the DB2 Control Center tool as described in 16.6, “Manually removing old CIMOM entries” on page 911. 258 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 279. 6.3 Disk and Replication Manager remote GUI It is possible to install a TotalStorage Productivity Center for Disk console on a server other than the one on which the TotalStorage Productivity Center for Disk code is installed. This allows you to manage TotalStorage Productivity Center for Disk from a secondary location. Having a secondary TotalStorage Productivity Center for Disk console will offload workload from the TotalStorage Productivity Center for Disk server. Note: You are only installing the IBM Director and TotalStorage Productivity Center for Disk console code. You do not need to install any other code for the remote console. In our lab we installed the remote console on a dedicated Windows 2000 server with 2 GB RAM. You must install all the consoles and clients on the same server. Here are the steps: 1. Install the IBM Director console. 2. Install the TotalStorage Productivity Center for Disk console. 3. Install the Performance Manager client if the Performance Manager component is installed. Installing the IBM Director console Follow these steps: 1. Start the setup.exe of IBM Director. 2. The main IBM Director window (Figure 6-18) opens. Click INSTALL IBM DIRECTOR. Figure 6-18 IBM Director installer Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 259
  • 280. 3. In the IBM Director Installation panel (Figure 6-19), select IBM Director Console installation. Figure 6-19 Installation options for IBM Director 4. After a moment, the InstallShield Wizard for IBM Director Console panel (Figure 6-20) opens. Click Next. Figure 6-20 Welcome panel 260 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 281. 5. In the License Agreement panel (Figure 6-21), select I accept the terms in the license agreement. Then click Next. Figure 6-21 License Agreement 6. The next panel (Figure 6-22) contains information about enhancing IBM Director. Click Next to continue. Figure 6-22 Enhance IBM Director Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 261
  • 282. 7. The Feature and installation directory selection panel (Figure 6-23) allows you to change how a program feature is installed. Click Next. Figure 6-23 Selecting the program features to install 8. In the Ready to Install the Program window (Figure 6-24), accept the default selection. Then click Install to start the installation. Figure 6-24 Ready to Install the Program panel 262 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 283. 9. The installation takes a few minutes. When it is finished, you see the InstallShield Wizard Completed window (Figure 6-25). Click Finish to complete the installation. Figure 6-25 Installation finished The remote console of IBM Director is now installed. Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 263
• 284. Installing the remote console for Productivity Center for Disk To install the remote console for Productivity Center for Disk, follow these steps: 1. Insert the installation media for Productivity Center for Disk and Replication Base. 2. Change to the W2K directory. Figure 6-26 shows the files in that directory. Figure 6-26 Files in the W2K directory 3. Start the LaunchPad.bat batch file. Coincidentally, this file has the same name as the TotalStorage Productivity Center Launchpad, although it has nothing to do with it. 4. Click Installation wizard to begin the installation (Figure 6-27 on page 265). 264 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 285. Figure 6-27 Multiple Device Manager LaunchPad 5. For a brief moment, you see a DOS box with the installer being unpacked. When this is done, you see the Welcome window shown in Figure 6-28. Click Next. Figure 6-28 Welcome window Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 265
• 286. 6. The License Agreement window (Figure 6-29) is displayed. Select I accept the terms in the license agreement and click Next. Figure 6-29 License Agreement 266 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 287. 7. The Destination Directory window (Figure 6-30) opens. Accept the default path or enter the target directory for the installation. Click Next. Figure 6-30 Installation directory Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 267
  • 288. 8. In the Select Product Type window (Figure 6-31), select Productivity Center for Disk and Replication Base Console for the product type. Click Next. Figure 6-31 Installation options 268 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 289. 9. The Preview window (Figure 6-32) contains the installation information. Review it and click Install to start the console install. Figure 6-32 Summary 10.When you reach the Finish window, click Finish to exit the add-on installer (Figure 6-33). Figure 6-33 Installation finished Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 269
• 290. 11.You return to the IBM TotalStorage Productivity Center for Disk and Replication Base installer window shown in Figure 6-27 on page 265. Click Exit to end the installation. The IBM Director remote console is now installed. The add-ons for IBM TotalStorage Productivity Center for Disk and Replication Base have been added. If the TotalStorage Productivity Center Launchpad is installed, it detects that the IBM Director remote console is available the next time the Launchpad is started. The Launchpad can now also be used to start IBM Director. 6.3.1 Installing the remote console for the Performance Manager function After installing the IBM Director Console and the TotalStorage Productivity Center for Disk base console, you need to install the remote console for the Performance Manager function. To do this, insert the CD-ROM that contains the code for TotalStorage Productivity Center for Disk and run setup.exe. In our example, we used the downloaded code as shown in the screenshot in Figure 6-34. Figure 6-34 Screenshot of our lab download directory location 270 IBM TotalStorage Productivity Center V2.3: Getting Started
• 291. Next, you will see the Welcome panel shown in Figure 6-35. Click Next. Figure 6-35 Welcome panel from TotalStorage Productivity Center for Disk installer The License Agreement panel shown in Figure 6-36 on page 272 appears. Select I accept the terms in the license agreement and click Next to continue. Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 271
  • 292. Figure 6-36 Accept the terms of license agreement. Choose the default destination directory as shown in Figure 6-37 and click Next. Figure 6-37 Choose default destination directory 272 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 293. In the next panel, choose to install Productivity Center for Disk Client and click Next as shown in Figure 6-38. Figure 6-38 Select Product Type Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 273
• 294. In the next panel, select both product check boxes if you would like to install both the console and the command line client for the Performance Manager function (see Figure 6-39). Click Next. Figure 6-39 TotalStorage Productivity Center for Disk features selection 274 IBM TotalStorage Productivity Center V2.3: Getting Started
• 295. The Productivity Center for Disk Installer - CoServer Parameters panel opens (see Figure 6-40). Enter the TPC user ID and password, and the IP address that the remote console uses to validate with the TPC server. This is the IP address of the TPC server, and the user ID and password are those used for the IBM Director logon. Figure 6-40 Productivity Center for Disk Installer - CoServer parameters The Productivity Center for Disk Installer - Preview panel appears (see Figure 6-41 on page 276). Review the information and click Install to start the process of installing the remote console. Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 275
  • 296. Figure 6-41 Productivity Center for Disk Installer - Preview When the install is complete you will see the Productivity Center for Disk Installer - Finish panel as shown in Figure 6-42. Click Finish to complete the install process. Figure 6-42 TotalStorage Productivity Center for Disk finish panel 276 IBM TotalStorage Productivity Center V2.3: Getting Started
• 297. 6.3.2 Launching the remote console for TotalStorage Productivity Center You can launch the remote console from the TotalStorage Productivity Center desktop icon on the remote console server. You see the window in Figure 6-43. Figure 6-43 TotalStorage Productivity Center launch window Click Manage Disk Performance and Replication as highlighted in the figure. This launches the IBM Director remote console. You can log on to the Director server and start using the remote console functions, except for Replication Manager. Note: At this point, you have installed the remote console for the Performance Manager function only, and not for Replication Manager. You can install the remote console for Replication Manager if you wish. Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk 277
  • 298. 278 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 299. 7 Chapter 7. Configuring TotalStorage Productivity Center for Replication This chapter provides information to help you customize the TotalStorage Productivity Center for Replication component of the TotalStorage Productivity Center. In particular, we describe how to set up a remote GUI and CLI. © Copyright IBM Corp. 2005. All rights reserved. 279
• 300. 7.1 Installing a remote GUI and CLI A replication session can be managed remotely using the graphical user interface (GUI) and command line interface (CLI). To install them, follow the procedure below. 1. Copy the suite install and Replication Manager code to the computer you wish to use. 2. In the suite install folder, double-click the setup.exe file to launch the installer wizard. 3. At the language panel (Figure 7-1), choose the language you wish to use during the install. Figure 7-1 Select a language 4. At the welcome screen (Figure 7-2), click Next. Figure 7-2 Welcome screen 280 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 301. 5. The software license agreement panel appears (Figure 7-3). Click the radio button next to I accept the terms of the license agreement and click Next to continue. Figure 7-3 License agreement 6. In the TotalStorage Productivity Center install options panel (Figure 7-4), click the radio button next to User interface installations of Data, Disk, Fabric, and Replication and click Next. Figure 7-4 TotalStorage Productivity Center install options Chapter 7. Configuring TotalStorage Productivity Center for Replication 281
• 302. 7. In the Remote GUI/Command Line Client component window (Figure 7-5), check the box by The Productivity Center for Replication - Command Line Client and click Next. Figure 7-5 Select Remote GUI/Command Line Client 8. A window opens (Figure 7-6) to begin the replication command line client install. Figure 7-6 Replication command client install 282 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 303. 9. In the next window, enter the location of the Replication Manager install package (Figure 7-7). Figure 7-7 Install package location for replication 10.A window opens prompting you to interact with the Replication Manager install wizard (Figure 7-8). Figure 7-8 Launch Replication Manager installer Chapter 7. Configuring TotalStorage Productivity Center for Replication 283
• 304. 11.The window in Figure 7-9 remains displayed until the install wizard is launched. Figure 7-9 Launching installer 12.The Productivity Center for Replication Installer - Welcome wizard window (Figure 7-10) opens. Click Next. Figure 7-10 Replication remote CLI install wizard 284 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 305. 13.Specify the directory path of the Replication Manager installation files in the window shown in Figure 7-11. Click Next. Figure 7-11 Replication remote CLI Installer - destination directory 14.In the CoServer Parameters window shown in Figure 7-12, enter the following information: – Host Name: Host name or IP address of the Replication Manager server – Host Port: Port number of the Replication Manager server (default value is 9443) – User Name: User name of the CIM Agent managing the storage device(s) – User Password: User password of the CIM Agent managing the storage device(s) Click Next to continue. Chapter 7. Configuring TotalStorage Productivity Center for Replication 285
  • 306. Figure 7-12 Replication remote CLI Installer - coserver parameters 15.Review the information in the Preview window shown in Figure 7-13 and click Install. Figure 7-13 Replication remote CLI Installer - preview 286 IBM TotalStorage Productivity Center V2.3: Getting Started
• 307. 16.After successfully installing the remote CLI, the window in Figure 7-14 appears. Click Finish. Figure 7-14 Replication remote CLI Installer - finish 17.After clicking Finish, the postinstall.txt file opens. You may read the file now or close it and view it at a later time. 18.A window opens informing you of a successful installation (see Figure 7-15). Click Next to finish. Figure 7-15 Remote CLI installation successful Chapter 7. Configuring TotalStorage Productivity Center for Replication 287
  • 308. 288 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 309. 8 Chapter 8. Configuring IBM TotalStorage Productivity Center for Data This chapter describes the necessary tasks to start using IBM TotalStorage Productivity Center for Data in your environment. After you install Productivity Center for Data, there are a few remaining steps to perform, but you can start to use it without performing these steps at first. Most people use Productivity Center for Data to look at the environment and see how the storage capacity is distributed. This chapter focuses on what is necessary to fulfill this task. The following procedures are covered in this chapter: Configuring a discovered IBM TotalStorage Enterprise Storage Server (ESS) Common Information Model (CIM) Agent Configuring a discovered Fibre Array Storage Technology (FAStT) CIM Agent Adding a CIM Agent that is located in a remote network Setting up the IBM TotalStorage Productivity Center for Data Web interface Setting up a remote console We also recommend that you perform the following actions, although we do not describe them here: Setting up the alerting dispositions: Simple Network Management Protocol (SNMP), Tivoli Enterprise Console (TEC), and mail Setting up retention of log files and other information © Copyright IBM Corp. 2005. All rights reserved. 289
• 310. 8.1 Configuring the CIM Agents Configuration of the CIM Agents for IBM TotalStorage Productivity Center for Data differs from the configuration you perform within Productivity Center for Disk. This section explains how to set up a CIM Agent in two situations: when it was discovered by the Data Manager, and when the CIM Agent is located in a different subnet and multicasts are not enabled. Here is an overview of the procedure to work with CIM Agents: 1. Perform discovery of a new CIM Agent (using the Service Location Protocol (SLP)). 2. Configure the discovered CIM Agent properties or define a new CIM Agent. 3. Discovery collects the device. 4. After the characteristics are available, set up the device for monitoring. 5. A probe on the device gathers information about the disks and logical unit numbers (LUNs). 8.1.1 CIM and SLP interfaces within Data Manager The CIM interface within Data Manager is used only to gather information about the disks, the LUNs, and some asset information. The data is correlated with the data that the manager receives from the agents. Because there is no way to install the Data Manager agent directly on a storage subsystem, Data Manager obtains the information from storage subsystems by using the Storage Management Initiative - Specification (SMI-S) standard. This standard in turn uses another standard, CIM. Data Manager uses this interface to access a storage subsystem. A CIM Agent (also called a CIM Object Manager (CIMOM)) that ideally runs within the subsystem, but can also run on a separate host, announces its existence by using SLP. You can learn more about this protocol in 2.3, “Service Location Protocol (SLP) overview” on page 38. Within Data Manager, an SLP User Agent (UA) is integrated, and that agent performs a discovery of devices. This discovery is limited to the local subnet of the Data Manager, and is expanded only if multicasts are enabled on the network routers. See “Multicast” on page 43 for details. 
Unlike Productivity Center for Disk, the User Agent that is integrated within the Data Manager cannot talk to an SLP Directory Agent (DA). This restriction requires you to manually configure every storage subsystem that was not automatically discovered. 8.1.2 Configuring CIM Agents The procedure to configure a CIM Agent is simple. If a CIM Agent was discovered, you simply enter the security information. We use the term CIM Agent instead of CIMOM because this is a more generic term. Figure 8-1 on page 291 shows the panel where the CIM Agents are configured. In our example, the first two entries show CIM Agents that were discovered but are not yet configured. The last two entries show an ESS and a FAStT CIM Agent that have already been configured. 290 IBM TotalStorage Productivity Center V2.3: Getting Started
• 311. If you want to configure a CIM Agent that cannot be discovered because of the restriction explained in 8.1.1, “CIM and SLP interfaces within Data Manager” on page 290, then you also need to enter the IP address and select the right protocol. Figure 8-1 Selecting CIM Agent Logins If you completed the worksheets (see Appendix A, “Worksheets” on page 991), have them available for the next steps. Configuring discovered CIM Agents For discovered CIM Agents that are not configured, complete these steps: 1. In the CIM/OM Login Administration panel (Figure 8-1), highlight the discovered CIM Agent. Click Edit. 2. The Edit CIM/OM Login Properties window (Figure 8-2 on page 292) opens. Proceed as follows: a. Verify the IP address, port, and protocol. Note: Not all CIM Agents provide secure communication via https. For example, FAStT does not provide https, so you have to select http. b. Enter the name and password for the user that was configured in the CIM Agent of that device. Note: At the time of this writing, a FAStT CIM Agent does not use a special user to secure the access. Data Manager still requires an input in the user and password fields, so type anything you want. c. If you selected https as the protocol to use, enter the complete path and file name of the certificate file that is used to secure the communication between the CIM Agent and the Data Manager. Note: The Truststore file of the ESS CIM Agent is located in the C:\Program Files\ibm\cimagent directory on the CIM Agent host. Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 291
  • 312. d. Click Save to finish the configuration. Figure 8-2 CIM Agent login properties Configuring new CIM Agents If you have to enter a CIM Agent manually, click New in the CIM/OM Login Administration panel (Figure 8-1 on page 291). The New CIM/OM Login Properties window (Figure 8-3) opens. You perform the same steps as described in “Configuring discovered CIM Agents” on page 291. For a new CIM Agent, you must also specify the IP address and protocol to use. The port is set depending on the protocol. 292 IBM TotalStorage Productivity Center V2.3: Getting Started
• 313. Figure 8-3 New CIM Agent login properties Next steps After you configure the CIM Agent properties, run discovery on the storage subsystems. During this process, the Data Manager talks to the CIM Agent to gather information about the devices. When this is completed, you see an entry for the subsystem in the Storage Subsystem Administration panel (Figure 8-5 on page 294). 8.1.3 Setting up a disk alias Optionally, you can change the name of a disk subsystem to a more meaningful name: 1. In the Data Manager GUI, in the Navigation Tree, expand the Administrative Services → Configuration → Data Manager subtree as shown in Figure 8-4. Select Storage Subsystem Administration. Figure 8-4 Navigation Tree Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 293
  • 314. 2. The panel shown in Figure 8-5 on page 294 opens. a. Highlight the subsystem. b. Place a check mark in the Monitored column. Note: Select the Monitored column if you want Data Manager to probe the subsystem whenever a probe job is run against it. If you deselect the Monitored check box for a storage subsystem, the following actions occur: All the data gathered by the server for the storage subsystem is removed from the enterprise repository. You can no longer run Monitoring, Alerting, or Policy Management jobs against the storage subsystem. c. Click Set disk alias. Figure 8-5 Storage Subsystem Administration 3. The Set Disk Alias window (Figure 8-6) opens. a. Enter the Alias/Name. b. Click OK to finish. Figure 8-6 Set Disk Alias 4. You may need to refresh the GUI for the changes to become effective. Right-click an old entry in the Navigation Tree, and select Refresh. Next steps Now that you have set up the CIM Agent properties and specified to monitor the subsystems, run a probe against it to collect data about the disks and LUNs. After you do this, you can look at the results in different reports. 294 IBM TotalStorage Productivity Center V2.3: Getting Started
• 315. 8.2 Setting up the Web GUI The Web GUI is basically the same as the remote GUI that you can install on any machine. You simply use a Web browser to download a Java™ application that is then launched. We show only the basic setup of the Web server, which may not be very secure. The objective here is to gain access to the Data Manager from a machine that does not have the remote GUI installed. Attention: We had the Tivoli Agent Manager running on the same machine. The Agent Manager comes with an application (the Agent Recovery Service) that uses port 80, so we had to find an unused port on the same machine. In addition, you must be careful if you use the Internet Information Server (IIS). IIS uses several ports by default, which may interfere with the installed WebSphere Application Server. Therefore we recommend that you use the IBM HTTP Server. 8.2.1 Using IBM HTTP Server This section explains how to set up the IBM HTTP Server to make the remote GUI available via the Web. When you install WebSphere Application Server on a machine, the IBM HTTP Server is installed on the same machine. The IBM HTTP Server does not come with a GUI for administration. Instead, you use configuration files to modify any settings. The HTTP server is installed in C:\Program Files\WebSphere\AppServer\HTTPServer. This directory contains the conf subdirectory, which contains the httpd.conf file, which is used to configure the server. 1. In the C:\Program Files\WebSphere\AppServer\HTTPServer\conf directory, open the httpd.conf file. 2. Locate the line where the port is defined. See Example 8-1. Change the port number. In our example, we used 2077. Example 8-1 Abstracts of the httpd.conf file ServerName GALLIUM # This is the main server configuration file. See URL http://www.apache.org/ # for instructions. # Do NOT simply read the instructions in here without understanding # what they do, if you are unsure consult the online docs. You have been # warned. 
# Originally by Rob McCool # Note: Where filenames are specified, you must use forward slashes # instead of backslashes. e.g. "c:/apache" instead of "c:\apache". If # the drive letter is omitted, the drive where Apache.exe is located # will be assumed .... # Port: The port the standalone listens to. #Port 80 Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 295
• 316. Port 2077 3. Locate the line AfpaEnable. Comment out the three Afpa... lines as shown in Example 8-2. Example 8-2 Afpa #AfpaEnable #AfpaCache on #AfpaLogFile "C:\Program Files\WebSphere\AppServer\HTTPServer/logs/afpalog" V-ECLF 296 IBM TotalStorage Productivity Center V2.3: Getting Started
• 317. 4. Locate the line that starts with <Directory. Modify the line to point the directory to C:\Program Files\IBM\TPC\Data\gui as shown in Example 8-3. Example 8-3 Directory setting # --------------------------------------------------------------------------- # This section defines server settings which affect which types of services # are allowed, and in what circumstances. # Each directory to which Apache has access, can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # Note: Where filenames are specified, you must use forward slashes # instead of backslashes. e.g. "c:/apache" instead of "c:\apache". If # the drive letter is omitted, the drive where Apache.exe is located # will be assumed # First, we configure the "default" to be a very restrictive set of # permissions. # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # This should be changed to whatever you set DocumentRoot to. #<Directory "C:\Program Files\WebSphere\AppServer\HTTPServer/htdocs/en_US"> <Directory "C:\Program Files\IBM\TPC\Data\gui"> 5. Locate the line that starts with DocumentRoot. Modify the line to point the directory to C:\Program Files\IBM\TPC\Data\gui as shown in Example 8-4. Example 8-4 DocumentRoot # -------------------------------------------------------------------------------- # In the following section, you define the name space that users see of your # http server. This also defines server settings which affect how requests are # serviced, and how results should be formatted. # See the tutorials at http://www.apache.org/ for # more information. # Note: Where filenames are specified, you must use forward slashes # instead of backslashes. e.g. "c:/apache" instead of "c:\apache". 
If # the drive letter is omitted, the drive where Apache.exe is located # will be assumed. # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. #DocumentRoot "C:\Program Files\WebSphere\AppServer\HTTPServer/htdocs/en_US" DocumentRoot "C:\Program Files\IBM\TPC\Data\gui" Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 297
• 318. 6. Locate the line that starts with DirectoryIndex. Modify the line to use tpcd.html as the index document as shown in Example 8-5. Example 8-5 Directory index # DirectoryIndex: Name of the file or files to use as a pre-written HTML # directory index. Separate multiple entries with spaces. #DirectoryIndex index.html DirectoryIndex tpcd.html 7. Save the file. 8. Start the HTTP server: a. Open a command prompt. b. Change to the directory C:\Program Files\WebSphere\AppServer\HTTPServer. c. Type apache, and press Enter. This starts the HTTP server as a foreground application. Now when you use a Web browser, simply enter: http://servername:portnumber In our environment, we entered: http://gallium:2077 You see a Web page, and a Java application is then loaded. (Java is installed if necessary.) Note: Do not omit the http://. Since we do not use the default port, you have to tell the browser which protocol to use. 298 IBM TotalStorage Productivity Center V2.3: Getting Started
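The httpd.conf edits in steps 2 through 6 can also be applied with a small script. The following Python sketch is our own illustration, not part of the product; the port 2077 and the Data Manager GUI directory are the values from our example environment, so adjust both for your installation:

```python
import re

def edit_httpd_conf(text, port=2077, docroot=r"C:\Program Files\IBM\TPC\Data\gui"):
    """Apply the manual httpd.conf edits from steps 2-6 to a config string.

    Sketch only: assumes our example port (2077) and the default
    Data Manager GUI directory.
    """
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if re.match(r"#?Port\s+\d+$", stripped):
            line = f"Port {port}"                 # step 2: set the listen port
        elif stripped.startswith(("AfpaEnable", "AfpaCache", "AfpaLogFile")):
            line = "#" + line                     # step 3: comment out the Afpa lines
        elif stripped.startswith("<Directory "):
            line = f'<Directory "{docroot}">'     # step 4: point to the GUI directory
        elif stripped.startswith("DocumentRoot "):
            line = f'DocumentRoot "{docroot}"'    # step 5: serve from the GUI directory
        elif stripped.startswith("DirectoryIndex "):
            line = "DirectoryIndex tpcd.html"     # step 6: use tpcd.html as the index
        out.append(line)
    return "\n".join(out)
```

Run it on a copy of httpd.conf first and diff the result against the original before replacing the live file.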
  • 319. 8.2.2 Using Internet Information Server If you have IIS installed on the server running Data Manager, use these steps to enable the access to the remote GUI via a Web site. Attention: If you have WebSphere Application Server running on the same server, be careful not to create port conflicts, especially since port 80 is in use by both applications. 1. Start the Internet Information Services administration GUI. 2. A window opens as shown in Figure 8-7. In the left panel, right-click the entry with your host name and select New → Web Site to launch the Web Site Creation Wizard. Figure 8-7 Internet Information Server administration GUI Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 299
  • 320. 3. The Web Site Creation Wizard opens, displaying the Welcome panel (see Figure 8-8). Click Next. Figure 8-8 Web Site Creation Wizard 4. The Web Site Description panel (Figure 8-9) opens. Enter a description in the panel and click Next. Figure 8-9 Web Site Description panel 300 IBM TotalStorage Productivity Center V2.3: Getting Started
• 321. 5. The IP Address and Port Settings panel (Figure 8-10) opens. Enter an unused port number and click Next. Figure 8-10 IP Address and Port Setting panel 6. In the Web Site Home Directory panel (Figure 8-11), enter the home directory of the Web server. This is the directory where the files for the remote Web GUI are stored. The default is C:\Program Files\IBM\TPC\Data\gui. Click Next. Figure 8-11 Web Site Home Directory panel Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 301
  • 322. 7. The Web Site Access Permissions panel (Figure 8-12) opens. Accept the default access permissions, and click Next. Figure 8-12 Web Site Access Permissions panel 8. When you see the window indicating that you have successfully completed the Web Site Creation Wizard (Figure 8-13), click Finish. Figure 8-13 Setup finished 9. In the Internet Information Services window (Figure 8-7 on page 299), right-click the new Web server entry, and select Properties. 302 IBM TotalStorage Productivity Center V2.3: Getting Started
• 323. 10.The Data Manager Properties window (Figure 8-14) opens. a. Select the Documents tab. Figure 8-14 Adding a default document b. Click Add. c. In the window that opens, enter tpcd.html. Click OK. d. Click OK to close the Properties window. 11.The new Web site is now running under IIS. Now, when you use a Web browser, simply enter: http://servername:portnumber In our installation, we entered: http://gallium:2077 You see a Web page and a Java application is loaded. (Java is installed if necessary.) Note: Do not omit the http://. Since we do not use the default port, you have to tell the browser which protocol to use. 8.2.3 Configuring the URL in Fabric Manager The user properties file of the Fabric Manager contains settings that control polling, the SNMP traps destination, and the fully qualified host name of Data Manager. As an administrator, you can use srmcp manager service commands to display and set the values in the user properties file. The srmcp ConfigService set command sets the value of the specified property to a new value in the user properties file (user.properties). This command can be run only on the manager computer. Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 303
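Conceptually, the srmcp ConfigService set command rewrites one key in user.properties. The following Python sketch mimics that effect for illustration only; on a real system the file is maintained by srmcp, and the property name SRMURL is the one from this section while the file path is whatever your installation uses:

```python
from pathlib import Path

def set_property(path, key, value):
    """Sketch of what 'srmcp ConfigService set' does to user.properties:
    replace the value of an existing key, or append the key if absent.
    (Illustration only -- use the srmcp command on a real system.)
    """
    p = Path(path)
    lines = p.read_text().splitlines() if p.exists() else []
    out, found = [], False
    for line in lines:
        if line.split("=", 1)[0].strip() == key:
            out.append(f"{key}={value}")   # overwrite the existing entry
            found = True
        else:
            out.append(line)               # keep unrelated settings untouched
    if not found:
        out.append(f"{key}={value}")       # append a new entry
    p.write_text("\n".join(out) + "\n")
```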
• 324. Issuing a command on Windows Use these steps to enter a command using Windows. 1. Open a command prompt window. 2. Change the directory to installation directory\manager\bin\w32-ix86. The default is C:\Program Files\IBM\TPC\Fabric\manager\bin\w32-ix86. 3. Enter the following command: setenv 4. Enter the following command: srmcp -u Administrator -p password ConfigService set SRMURL http://data.itso.ibm.com:2077 The change is picked up immediately. There is no need to restart Fabric Manager. 8.3 Installing the Data Manager remote console To install the remote console for Productivity Center for Data, use the procedure explained in this section. You can also start the installation using the Suite Installer. However, when the Data Manager installer is launched, you begin with the first step of the procedure that follows. 1. Select the language. We selected English (Figure 8-15). Figure 8-15 Welcome panel 2. The next panel (see Figure 8-16 on page 305) is the Software License Agreement. Click I accept the terms in the license agreement and click Next to continue. 304 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 325. Figure 8-16 License agreement 3. The next panel allows you to select the components to be installed. For the remote console installation, select the User interface installations of Data, Disk, Fabric, and Replication (see Figure 8-17). Click Next to continue. Figure 8-17 Product selection Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 305
  • 326. 4. The next panel allows you to select which Remote GUI will be installed. Select the Productivity Center for Data (see Figure 8-18) and click Next to continue. Figure 8-18 Remote GUI selection panel 5. The next panel is informational (see Figure 8-19) and verifies that the Productivity Center for Data GUI will be installed. Figure 8-19 Verification panel 306 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 327. 6. The install package location panel is displayed. Specify the required information (see Figure 8-20) and click Next to continue. Figure 8-20 Install package location 7. Another information panel is displayed (see Figure 8-21) indicating that the product installer will be launched. Click Next to continue. Figure 8-21 The installer will be launched Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 307
• 328. 8. In the window that opens, like the one in Figure 8-22, select Install Productivity Center for Data and click Next. Figure 8-22 Installation action 9. The License Agreement panel (Figure 8-23) opens. Select I have read and AGREE to abide by the license agreement above and click Next. Figure 8-23 License Agreement 308 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 329. 10.A License Agreement Confirmation window (Figure 8-24) opens. Click Yes to confirm. Figure 8-24 License Agreement Confirmation 11.The next window that opens prompts you to specify what you want to install (see Figure 8-25). In this example, we already had the agent installed on our machine. Therefore all options are still available. Select The GUI for reporting and click Next. Figure 8-25 Installation options Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 309
  • 330. 12.In the Productivity Center for Data Parameters panel (Figure 8-26), enter the Data Manager connection details and a Data Manager server name. Change the port if necessary and click Next. Figure 8-26 Data Manager connection details 310 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 331. 13.In the Space Requirements panel (Figure 8-27), you can change the installation directory or leave the default. Click Next. Figure 8-27 Installation directory 14.If the directory does not exist, you see the message shown in Figure 8-28. Click OK to continue or Cancel to change the directory. Figure 8-28 Directory does not exist Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 311
  • 332. 15.You see the window shown in Figure 8-29 indicating that Productivity Center for Data has verified your entries and is ready to start the installation. Click Next to start the installation. Figure 8-29 Ready to start the installation 312 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 333. 16.During the installation, you see a progress indicator. When the installation is finished, you see the Install Progress panel (Figure 8-30). Click Done to exit the installer. Figure 8-30 Installation completed The IBM TotalStorage Productivity Center for Data remote console is now installed. If the TotalStorage Productivity Center Launchpad is installed, it detects that Productivity Center for Data remote console is available the next time the LaunchPad is started. The LaunchPad can now be used to start Productivity Center for Data. 8.4 Configuring Data Manager for Databases Complete the following steps before attempting to monitor your databases with Data Manager. 1. Go to Administrative Services → Configuration → General → License Keys and double-click IBM TPC for Data - Databases (Figure 8-31 on page 314). Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 313
  • 334. Figure 8-31 TPC for Data - Databases License Keys 2. From the list of agents, select those you wish to monitor by checking the box under Licensed (Figure 8-32). After checking the desired boxes, click the RDBMS Logins tab. Figure 8-32 TPC for Data - Databases Licensing tab 314 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 335. 3. To successfully scan a database, you must provide a login name and password for each instance. Click Add New... (Figure 8-33). Figure 8-33 RDBMS Logins 4. In the RDBMS Login Editor window, enter the required information: – Database - the database type you wish to monitor – Agent Host - the host you wish to monitor – Instance - the name of the instance – User - login ID for the instance – Password - password for the instance – Port - port where database is listening Figure 8-34 RDBMS Login Editor Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 315
  • 336. 5. After the database is successfully registered, click OK (Figure 8-35). Figure 8-35 RDBMS successfully registered 8.5 Alert Disposition This section describes the available alerting options one can configure. This option defines how the Alerts are generated when a corresponding event is discovered. This panel is shown in Figure 8-36 by going to Administrative Services → Configuration → General → Alert Disposition. Figure 8-36 Alert Disposition panel 316 IBM TotalStorage Productivity Center V2.3: Getting Started
• 337. You can specify these parameters: SNMP – Community - The name of the SNMP community for sending traps – Host - The system (event manager) that will receive the traps – Port - The port on which traps will be sent (the standard port is 162) TEC (Tivoli Enterprise Console) – TEC Server - The system (TEC) that will receive the traps – TEC Port - The port to which traps will be sent (the standard port is 5529) E-mail – Mail Server - The mail server that will be used for sending the e-mail – Mail Port - The port used for sending the mail to the mail server – Default Domain - The default domain to use when sending the e-mail – Return To - The return address for undeliverable e-mail – Reply To - The address to use when replying to an Alert-triggered e-mail Alert Log Disposition – Delete Alert Log Records older than - How long Alert Log records are kept before they are deleted Chapter 8. Configuring IBM TotalStorage Productivity Center for Data 317
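To make the e-mail parameters concrete, here is a short Python sketch of how such settings combine into an alert message. This is our own illustration, not Data Manager code; the mail server, domain, and addresses are hypothetical stand-ins for the values you enter in the Alert Disposition panel:

```python
from email.message import EmailMessage

# Hypothetical values standing in for the Alert Disposition settings.
MAIL_SERVER = "mailhost.example.com"      # Mail Server
MAIL_PORT = 25                            # Mail Port (standard SMTP port)
DEFAULT_DOMAIN = "example.com"            # Default Domain
RETURN_TO = "storage-admin@example.com"   # Return To
REPLY_TO = "storage-alerts@example.com"   # Reply To

def build_alert(recipient, subject, body):
    """Compose an alert e-mail from the Alert Disposition settings.

    A bare user name is qualified with the Default Domain, which is
    what the Default Domain field exists for.
    """
    if "@" not in recipient:
        recipient = f"{recipient}@{DEFAULT_DOMAIN}"
    msg = EmailMessage()
    msg["From"] = RETURN_TO        # undeliverable mail bounces back here
    msg["To"] = recipient
    msg["Reply-To"] = REPLY_TO     # replies to the alert go here
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Sending would then be a standard SMTP call, for example:
#   import smtplib
#   with smtplib.SMTP(MAIL_SERVER, MAIL_PORT) as s:
#       s.send_message(build_alert("operator", "TPC alert", "Probe failed"))
```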
  • 338. 318 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 339. 9 Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric This chapter explains the steps that you must follow, after you install IBM TotalStorage Productivity Center for Fabric from the CD, to configure the environment. Refer to 4.3.6, “IBM TotalStorage Productivity Center for Fabric” on page 157, which shows the installation procedure for installing IBM TotalStorage Productivity Center for Fabric using the Suite Installer. IBM TotalStorage Productivity Center for Fabric is a rebranding of IBM Tivoli Storage Area Network Manager. Since the configuration process has not changed, the information provided is still applicable. This IBM Redbook complements the IBM Redbook IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848. You may also want to refer to that redbook to learn about design or deployment considerations, which are not covered in this redbook. © Copyright IBM Corp. 2005. All rights reserved. 319
• 340. 9.1 TotalStorage Productivity Center component interaction This section discusses the interaction between IBM TotalStorage Productivity Center for Fabric and the other IBM TotalStorage Productivity Center components, as well as external products and devices. IBM TotalStorage Productivity Center for Fabric uses standards-based calls to gather information from devices so that it can provide information about your environment. 9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base When a supported storage area network (SAN) Manager is installed and configured, IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication leverage the SAN Manager to provide enhanced function. Along with basic device configuration functions, such as logical unit number (LUN) creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. In Version 2.1 of TotalStorage Productivity Center, IBM TotalStorage Productivity Center for Fabric is the supported SAN Manager. The SAN Manager functions that are exploited are: The ability to retrieve SAN topology information, including switches, hosts, ports, and storage devices. The ability to retrieve and to modify the zoning configuration on the SAN. The ability to register for event notification, which ensures that IBM TotalStorage Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when host LUN configurations change. 9.1.2 SNMP IBM TotalStorage Productivity Center for Fabric acts as a Simple Network Management Protocol (SNMP) manager to receive traps from managed devices in the event of status changes or updates. 
These traps are used to manage all the devices that the Productivity Center for Fabric is monitoring to provide the status window shown by NetView. These traps should then be passed onto a product, such as Tivoli Event Console (TEC), for central monitoring and management of multiple devices and products within your environment. When using the IBM TotalStorage Productivity Center Suite Installer, the SNMP configuration is performed for you. If you install IBM TotalStorage Productivity Center for Fabric manually, then you need to configure SNMP. The NetView code that is provided when you install IBM TotalStorage Productivity Center for Fabric is to be used only for this product. If you configure this NetView as your SNMP listening device for non-IBM TotalStorage Productivity Center for Fabric purposes, then you need to purchase the relevant NetView license. 9.1.3 Tivoli Provisioning Manager Tivoli Provisioning Manager uses IBM TotalStorage Productivity Center for Fabric when it performs its data resource provisioning. Provisioning is the use of workflows to provide resources (data or server) whenever workloads exceed specified thresholds and dictate that a resource change is necessary to continue to satisfy service-level agreements or business objectives. If the new resources are data resources which are part of the SAN fabric, then IBM TotalStorage Productivity Center for Fabric is invoked to provide LUN allocation, path definition, or zoning changes as necessary. 320 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 341. Refer to the IBM Redbook Exploring Storage Management Efficiencies and Provisioning: Understanding IBM TotalStorage Productivity Center and IBM TotalStorage Productivity Center with Advanced Provisioning, SG24-6373, which presents an overview of the product components and functions. It explains the architecture and shows the use of storage provisioning workflows. 9.2 Post-installation procedures This section discusses the next steps that we performed after the initial product installation from the CD, to take advantage of the function IBM TotalStorage Productivity Center for Fabric provides. After you install the Fabric Manager, you need to decide on which machines you will install the Agent and on which machines you will install the Remote Console. The following sections show how to install these components. 9.2.1 Installing Productivity Center for Fabric – Agent This section explains how to install the Productivity Center for Fabric – Agent. The installation must be performed by someone who has a user ID with administrator rights (Windows) or root authority (UNIX). We used the Suite Installer to install the Agent. You can also install directly from the appropriate subdirectory on the CD. Because the installation is Java based, it looks the same on all platforms. 1. In the window that opens, select the language for installation. We chose English. Click Next. You will see the Welcome screen (Figure 9-1). Click Next to continue. Figure 9-1 Welcome screen Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 321
  • 342. 2. The next screen (see Figure 9-2) is the Software License Agreement. Click I accept the terms in the license agreement and click Next to continue. Figure 9-2 License Agreement 3. In the Suite Installer panel (Figure 9-3), select the Agent installations of Data, Fabric, and CIM Agent option. Then click Next. Figure 9-3 Suite Installer panel for selecting Agent installations 322 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 343. 4. In the next window, select one or more Agent components (see Figure 9-4). In this example, we chose The Productivity Center for Fabric - Agent option. Click Next. Figure 9-4 Agent type selection panel 5. In the next panel, confirm the components to install. See Figure 9-5. Click Next. Figure 9-5 Productivity Center for Fabric - Agent confirmation Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 323
  • 344. 6. As shown in Figure 9-6, enter the install package location. In our case, we installed from the I: drive. You will most likely use the product CD. Click Next. Figure 9-6 Input location selection panel 7. A panel (Figure 9-7) opens indicating that the Productivity Center for Fabric installer will be launched. At this point, the Suite Installer is invoking the Installer process for the individual agent install. If you install the Agent directly from the CD, without using the Suite Installer, you commence the process after this point. The Suite Installer masks a few displays from you when it calls the product installer. Click Next. Figure 9-7 Product Installer will be launched 324 IBM TotalStorage Productivity Center V2.3: Getting Started
• 345. 8. In the window that opens, select the language for installation. We chose English. Click Next. Figure 9-8 Welcome screen 9. As shown in Figure 9-9 on page 326, specify the Fabric Manager Name and Fabric Manager Port Number. For the Fabric Manager Name, type the machine name of the server where the Productivity Center for Fabric - Manager is installed. If it is in a different domain, you must fully qualify the server name. In our case, colorado is the machine name of the server where Productivity Center for Fabric - Manager is installed. The port number is automatically inserted, but you can change it if you used a different port when you installed the Productivity Center for Fabric - Manager. Click Next. Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 325
  • 346. Figure 9-9 Fabric Manager name and port option 10.The Host Authentication password is entered in the panel shown in Figure 9-10. This was specified during the Agent Manager install and is used for agent installs. Figure 9-10 Host Authentication password 326 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 347. 11.In the next panel (Figure 9-11), you have the option to change the default installation directory. We clicked Next to accept the default. Figure 9-11 Selecting the installation directory 12.The Agent Information panel (Figure 9-12) asks you to specify a label which is applied to the Agent on this machine. We used the name of the machine. The port number is the port through which this Agent communicates. Click Next. Figure 9-12 Agent label and port Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 327
  • 348. 13.In the panel in Figure 9-13, specify the account that the Fabric Agent is to run under. We used the Administrator account. Figure 9-13 Fabric agent account 14.In the Agent Management Information panel (Figure 9-14 on page 329), enter the location of the Tivoli Agent Manager. In our configuration, colorado is the machine name in our Domain Name Server (DNS) where Tivoli Agent Manager is installed. The Registration Password is the password that you used when you installed Tivoli Agent Manager. Click Next. 328 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 349. Figure 9-14 Agent Manager information 15.Finally you see a confirmation panel (Figure 9-15) that shows the installation summary. Review the information and click Next. Figure 9-15 Installation summary panel Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 329
  • 350. 16.You see the installation status bar. Then you see a panel indicating a successful installation (Figure 9-16). Click Finish. Figure 9-16 Successful installation panel 17.The panel in Figure 9-17 indicates the successful install of the Fabric agent. Figure 9-17 Successful install of fabric agent panel. 18.You return to the Suite Installer window where you have the option to install other Agents. Click Cancel to finish. 330 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 351. Upon successful installation, you notice that nothing is added to the Start menu. The only evidence that the Agent is installed and running is that a Service is automatically started. Figure 9-18 shows the started Services in our Windows environment. Figure 9-18 Common Agent Service indicator If you look in the Control Panel, under Add/Remove Programs, there is now an entry for IBM TotalStorage Productivity Center for Fabric - Agent. To remove the Agent, you click this entry. 9.2.2 Installing Productivity Center for Fabric – Remote Console This section explains how to install the Productivity Center for Fabric – Remote Console. Pre-installation tasks Before you begin the installation, make sure that you have met the requirements that are discussed in the following sections. SNMP service installed Make sure that you have installed the SNMP service and have an SNMP community name of Public defined. For more information, see 3.10, “Installing SNMP” on page 73. Existing Tivoli NetView installation If you have an existing Tivoli NetView 7.1.4 installation, you can use it with Productivity Center for Fabric installation. If you have any other version installed, you must uninstall it before you install Productivity Center for Fabric – Remote Console. Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 331
• 352. Installing the console The Productivity Center for Fabric Console remotely displays information about the monitored SAN. A user who has Administration rights must perform the installation. At the time of writing this redbook, this installation was supported on the Windows 2000 and Windows XP platforms. The following steps show a successful installation. We used the Suite Installer to install the Console, and the following windows reflect that process. 1. Select the language. We selected English. The next panel (see Figure 9-19) is the installer Welcome panel. Figure 9-19 Installer Welcome panel 332 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 353. 2. The next screen (see Figure 9-20) is the Software License Agreement. Click I accept the terms in the license agreement and click Next to continue. Figure 9-20 License agreement panel 3. The first Suite Installer window (Figure 9-21) opens. Select the User interface Installations of Data, Fabric, and Replication option and then click Next. Figure 9-21 Suite Installer for selecting Console Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 333
  • 354. 4. In the next panel (Figure 9-22), select one or more remote GUI or command line components. To install the console, select The Productivity Center for Fabric - Remote GUI Client. Click Next. Figure 9-22 Selecting the Remote Console 5. In the installation confirmation panel (Figure 9-23), click Next. Figure 9-23 Remote GUI Client installation confirmation 334 IBM TotalStorage Productivity Center V2.3: Getting Started
• 355. 6. As shown in Figure 9-24, enter the location of the source code for the installation. In most cases, this is the product CD drive. In our case, we installed the code from the E: drive. Click Next. Figure 9-24 Source code location panel 7. The next panel (Figure 9-25) indicates that the Fabric Installer will be launched. If you install the console directly from the CD, without using the Suite Installer, you begin the process after this point. The Suite Installer masks a few displays from you when it calls the product installer. Click Next. Figure 9-25 Installer will be launched Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 335
  • 356. 8. Select the language. We selected English. The Suite Installer launches the Fabric installer (see Figure 9-26). Figure 9-26 Productivity Center for Fabric installer launched 9. The InstallShield Wizard opens for IBM TotalStorage Productivity Center for Fabric - Console (see Figure 9-27, “InstallShield Wizard for Console” on page 336). Click Next. Figure 9-27 InstallShield Wizard for Console 336 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 357. 10.In the next panel, you can specify the location of the directory into which the product will be installed. Figure 9-28 shows the default location. Click Next. Figure 9-28 Default installation directory 11.Specify the name and port number of the host where the Productivity Center for Fabric Manager is installed. See Figure 9-29. Click Next. Figure 9-29 Productivity Center for Fabric Manager details Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 337
  • 358. 12.In the next panel (Figure 9-30), specify a starting port number from which the installer will allocate a series of ports for communication. We used the default. Click Next. Figure 9-30 Starting port number 13.Type the password that you will use for all remote consoles or that the managed hosts will use for authentication with the manager (see Figure 9-31). This password must be the same as the one you entered in the Fabric Manager Installation. Click Next. Figure 9-31 Host Authentication panel 338 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 359. 14.Specify the drive where NetView is to be installed or accept the default (see Figure 9-32). Click Next. Figure 9-32 Selecting the NetView installation drive 15.As shown in Figure 9-33, specify a password which will be used to run the NetView Service. Then click Next. Figure 9-33 NetView Service password Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 339
  • 360. 16.A panel opens that displays a summary of the installation (Figure 9-34). Click Next to begin the installation. Figure 9-34 Summary panel 17.The installation completes successfully as indicated by the message in the panel shown in Figure 9-35. Click Next. Figure 9-35 Installation successful message 340 IBM TotalStorage Productivity Center V2.3: Getting Started
• 361. 18.You are prompted to restart your machine (Figure 9-36). You may elect to restart immediately or at another time. We chose Yes, restart my computer. Click Finish. Figure 9-36 Restart computer request After rebooting your system, you see that a new Service is automatically started, as shown in Figure 9-37. Figure 9-37 NetView Service To start the Remote Console, click Start → Programs → Tivoli NetView → NetView Console. Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric 341
9.3 Configuring IBM TotalStorage Productivity Center for Fabric

This section explains how to configure IBM TotalStorage Productivity Center for Fabric.

9.3.1 Configuring SNMP

When you use the IBM TotalStorage Productivity Center Suite Installer, the SNMP configuration is performed for you. If you install IBM TotalStorage Productivity Center for Fabric manually, you need to configure Productivity Center for Fabric yourself. There are several ways to configure Productivity Center for Fabric for SNMP traps.

Method 1: Forward traps to the local Tivoli NetView console

In this scenario, you set up the devices to send SNMP traps to the NetView console, which is installed on the Productivity Center for Fabric Server. Figure 9-38 shows an example of this setup: several managed hosts (with Agents), disk arrays, and a SAN switch sending SNMP traps to the Productivity Center for Fabric Manager.

Figure 9-38 SNMP traps to local NetView console
NetView listens for SNMP traps on port 162, and the default community is public. When a trap arrives at the Tivoli NetView console, it is logged in the NetView Event browser and then forwarded to Productivity Center for Fabric, as shown in Figure 9-39. Tivoli NetView is configured for trap forwarding to the Productivity Center for Fabric Server during installation of the Productivity Center for Fabric Server.

Figure 9-39 SNMP trap reception (a Fibre Channel switch sends SNMP traps to Tivoli NetView on port 162; trapfrwd.conf forwards them to TCP/IP port 9556 on the Productivity Center for Fabric Server)

NetView forwards SNMP traps to the defined TCP/IP port, which is the sixth port derived from the base port defined during installation. We used the base port 9550, so the trap forwarding port is 9556. With this setup, the SNMP trap information appears in the NetView Event browser. Productivity Center for Fabric uses this information to update the topology map.

Note: If the traps are not forwarded to Productivity Center for Fabric, the topology map is updated based on the information coming from Agents at regular polling intervals.

The default Productivity Center for Fabric Server installation (including the NetView installation) sets up the trap forwarding correctly.

Existing NetView installation

If you installed Productivity Center for Fabric with an existing NetView, you need to set up trap forwarding:

1. Configure the Tivoli NetView trapfrwd daemon. Edit the trapfrwd.conf file in the \usr\ov\conf directory. This file has two sections: Hosts and Traps.
   a. Modify the Hosts section to specify the host name and port to forward traps to (in our case, port 9556 on host COLORADO.ALMADEN.IBM.COM).
   b. Modify the Traps section to specify which traps Tivoli NetView should forward.
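As a worked example of the port arithmetic above: the trap-forwarding port is always the base port chosen at installation plus an offset of six. This tiny helper is ours, not part of the product; it only makes the derivation explicit.

```python
def trap_forwarding_port(base_port: int) -> int:
    """Return the TCP port NetView forwards traps to.

    Per the setup described above, the trap-forwarding port is
    the sixth port derived from the installation base port.
    """
    return base_port + 6

# The base port used in this book's environment was 9550,
# so traps are forwarded to port 9556.
print(trap_forwarding_port(9550))  # -> 9556
```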
The traps to forward for Productivity Center for Fabric are:
1.3.6.1.2 * (includes MIB-2 traps and McDATA’s FC Management MIB traps)
1.3.6.1.3 * (includes FE MIB and FC Management MIB traps)
1.3.6.1.4 * (includes proprietary MIB traps and QLogic’s FC Management MIB traps)
Example 9-1 shows a sample trapfrwd.conf file.

Example 9-1 trapfrwd.conf file

[Hosts]
#host1.tivoli.com 0
#localhost 1662
colorado.almaden.ibm.com 9556
[End Hosts]
[Traps]
#1.3.6.1.4.1.2.6.3 *
#mgmt
1.3.6.1.2 *
#experimental
1.3.6.1.3 *
#Andiamo
1.3.6.1.4.1.9524 *
#Brocade
1.3.6.1.4.1.1588 *
#Cisco
1.3.6.1.4.1.9 *
#Gadzoox
1.3.6.1.4.1.1754 *
#Inrange
1.3.6.1.4.1.5808 *
#McData
1.3.6.1.4.1.289 *
#Nishan
1.3.6.1.4.1.4369 *
#QLogic
1.3.6.1.4.1.1663 *
[End Traps]

2. The trapfrwd daemon must be running before traps are forwarded. Tivoli NetView does not start this daemon by default. To configure Tivoli NetView to start the trapfrwd daemon, enter these commands at a command prompt:

ovaddobj \usr\ov\lrf\trapfrwd.lrf
ovstart trapfrwd

3. To verify that trapfrwd is running, in NetView, select Options → Server Setup. In the Server Setup – Tivoli NetView window (Figure 9-40 on page 345), you see that trapfrwd is running.
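Before restarting the trapfrwd daemon, it can be useful to sanity-check the [Hosts] entries you edited. The following is an illustrative sketch, not an IBM tool; it assumes the format shown in Example 9-1 (lines of `host port` between [Hosts] and [End Hosts], with # marking comments).

```python
def parse_hosts(conf_text: str):
    """Extract (host, port) pairs from the [Hosts] section of trapfrwd.conf."""
    hosts, in_hosts = [], False
    for raw in conf_text.splitlines():
        line = raw.strip()
        if line == "[Hosts]":
            in_hosts = True
        elif line == "[End Hosts]":
            in_hosts = False
        elif in_hosts and line and not line.startswith("#"):
            host, port = line.split()
            hosts.append((host, int(port)))
    return hosts

sample = """\
[Hosts]
#host1.tivoli.com 0
#localhost 1662
colorado.almaden.ibm.com 9556
[End Hosts]
"""
print(parse_hosts(sample))  # -> [('colorado.almaden.ibm.com', 9556)]
```

A quick check like this confirms that the active (uncommented) entry points at the expected host and the correct derived forwarding port before you run ovstart.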
Figure 9-40 Trapfrwd daemon

After trap forwarding is enabled, configure the SAN components, such as switches, to send their SNMP traps to the NetView console.

Note: This type of setup gives you the best results, especially for devices on which you cannot change the number of SNMP recipients and the destination ports.

Method 2: Forward traps directly to Productivity Center for Fabric

In this example, you configure the SAN devices to send SNMP traps directly to the Productivity Center for Fabric Server. The receiving port number is the primary port number plus six. In this case, traps are used only to reflect topology changes, and they are not shown in the NetView Event browser.

Note: Some devices do not allow you to change the SNMP port. They send traps only to port 162. In such cases, this scenario is not useful.
Method 3: Traps to the Productivity Center for Fabric and SNMP console

In this example, you set up the SAN devices to send SNMP traps to both the Productivity Center for Fabric Server and a separate SNMP console installed in your organization. See Figure 9-41, which shows managed hosts, disk arrays, and a SAN switch sending traps to both the TotalStorage Productivity Center for Fabric Server and an SNMP console listening on port 162.

Figure 9-41 SNMP traps for two destinations

The receiving port number for the Productivity Center for Fabric Server is the primary port number plus six. The receiving port number for the SNMP console is 162. In this case, traps are used to reflect the topology changes, and they are displayed in the SNMP console events. The SNMP console in this case can be another Tivoli NetView installation or any other SNMP management application. For such a setup, the devices have to support setting multiple trap receivers and changing the trap destination port. Because this functionality is not supported by all devices, we do not recommend this scenario.

9.3.2 Configuring the outband agents

The Productivity Center for Fabric Server uses agents to discover the storage environment and to monitor its status. These agents are set up in the Agent Configuration panel.

1. From the NetView console, select SAN → Configuration → Configure Manager.
2. The SAN Configuration window (Figure 9-42) opens.
   a. Select the Switches and Other SNMP Agents tab on the left side.

Figure 9-42 Selecting switches and other SNMP agents

   b. You see the outband agents in the right panel. Define all the switches in the SAN that you want to monitor. To define such an agent, click Add.
   c. The Enter IP Address window (Figure 9-43) opens. Enter the host name or IP address of the switch and click OK.

Figure 9-43 Outband agent definition

   d. The agent appears in the agent list as shown in Figure 9-42. The state of the agent must be Contacted if you want Productivity Center for Fabric to get data from it.
   e. To remove an already defined agent, select it and click Remove.

Defining a logon ID for zone information

Productivity Center for Fabric can retrieve zone information from IBM Fibre Channel switches and from Brocade SilkWorm Fibre Channel switches. To accomplish this, Productivity Center for Fabric uses application programming interface (API) calls to retrieve zoning information. To use this API, Productivity Center for Fabric must log in to the switch with administrative rights. If you want to see zoning information, you need to specify the login ID for the agents you define.
  • 368. Here is the procedure: 1. In the SAN Configuration window (Figure 9-42 on page 347), select the defined Agent and click Advanced. 2. In the SNMP Agent Configuration window (Figure 9-44), enter the user name and password for the switch login. Click OK to save this information. Figure 9-44 Logon ID definition You can now see zone information for your switches. Tip: You must enter user ID and password information only for one switch in each SAN to retrieve the zoning information. We recommend that you enter this information for at least two switches for redundancy. Enabling more switches than necessary for API zone discovery may slow performance. 9.3.3 Checking inband agents After you install agents on the managed systems, as explained in 9.2.1, “Installing Productivity Center for Fabric – Agent” on page 321, the Agents should appear in the Agent Configuration window with an Agent state of Contacted (see Figure 9-42 on page 347). If the Agent does not appear in the panel, check the Agent log file for the cause. You can only remove Agents which are no longer responding to the server. Such Agents display a status of Not responding, as shown in Figure 9-45. Figure 9-45 Not responding inband Agent 348 IBM TotalStorage Productivity Center V2.3: Getting Started
9.3.4 Performing an initial poll and setting up the poll interval

After you set up the agents and devices for use with the Productivity Center for Fabric Server, you perform the initial poll. You can poll manually using the SAN Configuration panel (Figure 9-46):

1. In NetView, select SAN → Configure.
2. In the SAN Configuration window, click Poll Now to perform a manual poll.

Note: Polling takes time, depending on the size of the SAN.

3. If you did not configure trap forwarding for the SAN devices (as described in 9.3.1, “Configuring SNMP” on page 342), you must define the polling interval. In this case, the topology change is not event driven from the devices, but is updated regularly at the polling interval. You set the poll interval in the SAN Configuration panel (Figure 9-46). You can specify the polling interval in:
   – Minutes
   – Hours
   – Days: You can specify the time of day for polling.
   – Weeks: You can specify the day of the week and the time of day for polling.

After you set the poll interval, click OK to save the changes.

Figure 9-46 SAN Configuration

Tip: You do not need to configure the polling interval if all your devices are set to send SNMP traps to the local NetView console or the Productivity Center for Fabric Server.
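To get a feel for the trade-off between the interval units above, the choices can be normalized to seconds. The helper below is a hypothetical illustration (names are ours, not the product's): without trap forwarding, the topology map can lag a device change by up to one full poll interval.

```python
# Seconds per polling unit offered by the SAN Configuration panel.
UNIT_SECONDS = {"minutes": 60, "hours": 3600, "days": 86400, "weeks": 604800}

def poll_interval_seconds(value: int, unit: str) -> int:
    """Convert a poll interval such as (4, 'hours') to seconds."""
    return value * UNIT_SECONDS[unit]

# A 4-hour interval means a topology change may go unnoticed
# for up to 14400 seconds when traps are not forwarded.
print(poll_interval_seconds(4, "hours"))  # -> 14400
```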
  • 371. 10 Chapter 10. Deployment of agents Chapter 9, “Configuring IBM TotalStorage Productivity Center for Fabric” on page 319, covers the installation of the managers that are the central part of IBM TotalStorage Productivity Center. During that installation, the Resource Managers of Productivity Center for Data and Productivity Center for Fabric were installed and registered to a Tivoli Agent Manager, either an existing one, or one that was installed as a prerequisite in the first phase of the installation. This chapter explains how to set up the individual agents (subagents) on a managed host. The agents of Data Manager and Fabric Manager are called subagents, because they reside within the scope of the common agent. © Copyright IBM Corp. 2005. All rights reserved. 351
10.1 Installing the agents

There are two ways to set up a new subagent on a host, depending on the state of the target machine:
– The common agent is not installed. In this case, install the software using an installer.
– The common agent is installed. In this case, deploy the agent from the Data or Fabric Manager, or install it using the installer.

To install the agent, follow these steps:

1. In the Suite Installer panel (Figure 10-1), select Agent installations of Data, Fabric, and CIM Agent. Click Next.

Figure 10-1 Suite installer installation action
  • 373. 2. In the next panel (Figure 10-2), select one or more agents to install. The options include the IBM TotalStorage Enterprise Storage Server (ESS) Common Information Model (CIM) Agent. However, this agent does not use any functions of Tivoli Agent Manager. Figure 10-2 Agent selection panel The next window asks you to enter the location of the installation code. Then the panel that follows tells you that the individual product installer is launched and you are asked to interact with it. Chapter 10. Deployment of agents 353
  • 374. 10.2 Data Agent installation using the installer After the product installer for IBM TotalStorage Productivity Center for Data starts, you can choose to install, uninstall, or apply maintenance to this component. Since no component of the Productivity Center for Data is installed on the server, only one option is available to install. 1. In the Install window (Figure 10-3), select Install Productivity Center for Data and click Next. Figure 10-3 IBM TotalStorage Productivity Center for Data installation action 354 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 375. 2. A window opens showing the license agreement (see Figure 10-4). Select the I have read and AGREE to abide by the license agreement above check box and click Next. Figure 10-4 License agreement 3. The License Agreement Confirmation window (Figure 10-5) opens. It asks you to confirm that you have read the license agreement. Click Yes. Figure 10-5 License Agreement Confirmation Chapter 10. Deployment of agents 355
  • 376. 4. In the next panel (Figure 10-6), choose the option of Productivity Center for Data that you want to install. To install the agent locally on the same machine on which the installer is currently running, select An agent on this machine and click Next. Figure 10-6 Installation options 356 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 377. 5. In the Productivity Center for Data Parameters panel (Figure 10-7), enter the server name and port of your Productivity Center for Data Manager. In our environment, the name of the server was gallium, and the port was the default, which is 2078. We did not need to use the fully qualified host name, but this may be different in your environment. Click Next. Figure 10-7 Data Manager server details The installer tries to contact the Data Manager server. If this is successful, you see a message like Server gallium:2078 connection successful - server parameters verified in the Progress Log section of the installation window Figure 10-7. 6. The installer checks whether a common agent is already installed on the machine. Because in our environment no common agent was installed on the machine, the installer issues the message No compatible Common Agents were discovered so one will be installed. See the Progress log in Figure 10-8 on page 358. Chapter 10. Deployment of agents 357
7. As shown in Figure 10-8, enter the parameters of the agent:
   a. Use the suggested default port 9510.
   b. Deselect Agent should perform a SCAN when first brought up, because a scan may take a long time and you may want to schedule it overnight.
   c. Leave Agent may run scripts sent by server selected.
   d. The Agent Registration Information is the password that you specified during the installation of the Tivoli Agent Manager.

Note: Do not change the common agent port, because this may prevent the deployment of agents later.

   e. Click Next to continue the installation.

Figure 10-8 Parameter for the common agent and Data Agent
  • 379. 8. In the Space Requirements panel (Figure 10-9), accept the default directory for the common agent installation. Click Next to proceed with the installation. Figure 10-9 Common Agent installation directory 9. If the directory that you specify does not exist, you see the message shown in Figure 10-10. Click OK to acknowledge this message and continue the installation. Figure 10-10 Creating the directory Chapter 10. Deployment of agents 359
  • 380. 10.Figure 10-11 shows the last panel before the installation starts. Review the progress log. If you want to review the parameters, click Prev to go to the previous panels. Then click Next. Figure 10-11 Review settings 360 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 381. 11.The installation starts and displays the progress in the Install Progress window (Figure 10-12). The progress bar is not shown in the picture, but you can see the messages in the progress log. When the installation is complete, click Done to end the installation. Figure 10-12 Installation progress 12.The installer closes now, and the Suite Installer is active again. It also reports the successful installation. Click Next to return to the panel shown in Figure 10-1 on page 352 to install another agent (for example a Fabric Agent) or click Cancel to exit the installation. 10.3 Deploying the agent The deployment of an agent is a convenient way to install a subagent onto multiple machines at the same time. You can also use this method if you do not want to install agents directly on each machine where an agent should be installed. The most important prerequisite software to install on the target machines is the common agent. If the common agent is not already installed on the target machine, the deployment will not work. For example, if you installed one of the two Productivity Center agents, on the targets, you can deploy the other agent using the methods described here. At the time of this writing, Suite Installer does not have the option to deploy agents, so you have to use the native installer setup.exe program for Fabric Manager. The packaging of Data Manager is different and you can use the Suite Installer to install it. Chapter 10. Deployment of agents 361
  • 382. Note: For agent deployment, you do not need to have the certificate files available, because the target machines already have the necessary certificates installed during the common agent installation. Data Agent You can perform this installation from any machine. It does not have to be the Data Manager server itself. When you use the Suite Installer, there is no option to deploy agents. However, you can choose to install an agent, to launch the Data Manager installer, and later deploy an agent instead of installing it (see Figure 10-6 on page 356). We did not use the Suite Installer for the agent deployment. 1. Start the installer by running setup.exe from the Data Manager installation CD. 2. After a few seconds, you see the panel shown in Figure 10-13. If you have the Data Manager or the agent already installed on that machine where you started the installer, select Uninstall Productivity Center for Data or Apply maintenance to Productivity Center for Data. Click Next. Figure 10-13 Productivity Center for Data Installation action 3. A window opens displaying the license agreement (see Figure 10-4 on page 355). Follow the steps as explained in steps 2 on page 355 and 3 on page 355. 362 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 383. 4. In the next window that opens (Figure 10-14), select Agents on other machines. Then click Next. Figure 10-14 Productivity Center for Data Install agents options Chapter 10. Deployment of agents 363
  • 384. 5. In the Productivity Center for Data Parameters panel (Figure 10-15), enter the Productivity Center for Data server name and the port number. Then click Next. Figure 10-15 Data Manager server details 364 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 385. 6. The installer tries to verify your input by connecting to the Data Manager server. The message Server gallium:2078 connection successful - server parameters verified is displayed in the progress log (see Figure 10-16) if it is successful. Click Next. 7. In our environment, we did not have a Windows domain, so we entered the details of the target machines manually. Click Manually Enter Agents. Figure 10-16 Select the Remote Agents to install: Manually entering Agents Chapter 10. Deployment of agents 365
8. In the Manually Enter Agent window (Figure 10-17), enter the IP address or host name of the target computer and the user ID and password of a valid Windows user on that machine. You can enter more than one machine here only if all the machines can be managed with the same user ID and password. Click OK after you enter all computers that can be managed with the same user ID.

Figure 10-17 Manually Enter Agents panel
9. The list of computers on which the subagent will be installed is updated and now appears as shown in Figure 10-18. If you want to install the subagent onto a second computer, but that computer uses a different user ID than the previous one, click Manually Enter Agents again to enter the information for that second computer. Repeat this step for every computer that uses a different user ID and password. After you enter all target computers, click Next.

Figure 10-18 Selecting the Remote Agents to install: Computers targeted for a remote agent install

10. At this time, the installer tries to contact the common agent on the target computers to get information about them. This may take a while, so at first you cannot select anything in the window that is presented next (see Figure 10-19 on page 368). Look at the progress log in the lower section of the window to determine what is currently happening. If the installer cannot contact the target computer, verify that the common agent is running. You can do that by looking at the status of the Windows services on the target machine. Another way is to open a telnet connection from a Command Prompt to that machine on port 9510:

c:\>telnet 9.1.38.104 9510

If the common agent is running, it listens for requests on that port and opens a connection. You simply see a blank screen. If the common agent is not running, you see the message Connecting To 9.1.38.104...Could not open a connection to host on port 9510 : Connect failed.

When the installer is done with this step, you see the message Productivity Center for Data subagent on 9.1.38.104 will be installed at C:\Program Files\tivoli\ep\TPC\Data. Deselect Agent should perform a SCAN when first brought up, because a scan may take a long time and you may want to schedule it overnight. Click Install to start the deployment.
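The telnet check above can also be scripted. This sketch uses Python's standard socket module; the target address 9.1.38.104 and the common-agent port 9510 come from the text, and the check only confirms that something is listening on the port, not that it is actually the common agent.

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: verify the common agent port on a target machine before deployment.
# port_open("9.1.38.104", 9510)
```

Like the telnet test, a successful connection is equivalent to the blank screen, and a refused connection corresponds to the "Could not open a connection to host" message.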
  • 388. Figure 10-19 Common agent status 11.When the deployment is finished, you see the message shown in Figure 10-20. Review the progress log. Click OK to end the installation. Figure 10-20 Agent deployment installation completed 368 IBM TotalStorage Productivity Center V2.3: Getting Started
Fabric Agent

There are differences between the Data Agent deployment and the Fabric Agent deployment. To remotely deploy one or more Fabric Manager subagents, you must be logged on to the Fabric Manager server. This is different from the Data subagent deployment, where you can start the installation from any machine. At this time, there is no way to use the Suite Installer, so you have to use the native Fabric Manager installer. The Fabric Manager comes with a separate package for the Fabric Agent; Data Manager comes with only one installation program for all the possible install options (server, agent, remote agent, or GUI). To start the deployment, you start the Fabric Manager installer, not the installer for the Fabric Agent.

1. Launch setup.exe from the Fabric Manager installation media.
2. After a Java Virtual Machine is prepared and you select the language of the installer, a window opens that prompts you to select the type of installation to perform. See Figure 10-21. Select Remote Fabric Agent Deployment and click Next.

Figure 10-21 Installation action

3. A Welcome window opens. Click Next.
  • 390. 4. The IBM License Agreement Panel (Figure 10-22) opens. Select I accept the terms in the license agreement and click Next. Figure 10-22 License agreement 5. The installer connects to the Tivoli Agent Manager and presents a list of hosts. Select the hosts to deploy the agents. See Figure 10-23. Click Next to start the deployment. Figure 10-23 Remote host selection 370 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 391. 6. The next panel (Figure 10-24) displays the selected hosts. Verify the information. You can click Back to change your selection or click Next to start the installation. Figure 10-24 Remote host confirmation 7. When the installation is completed, you see a summary window similar to the example in Figure 10-25. Click Finish. Figure 10-25 Agent Deployment summary Your agent should now be installed on the remote hosts. Chapter 10. Deployment of agents 371
  • 393. Part 4 Part 4 Using the IBM TotalStorage Productivity Center In this part of the book we provide information about using the components of the IBM TotalStorage Productivity Center product suite. We include a chapter filled with hints and tips about setting up the IBM TotalStorage Productivity Center environment and problem determination basics, as well as a chapter on maintaining the DB2 database. © Copyright IBM Corp. 2005. All rights reserved. 373
  • 395. 11 Chapter 11. Using TotalStorage Productivity Center for Disk This chapter provides information about the functions of the Productivity Center common base. Components of the Productivity Center common base include these topics: Launching and logging on to TotalStorage Productivity Center Launching device managers Performing device inventory collection Working with the ESS, DS6000, and DS8000 families Working with SAN Volume Controller Working with the IBM DS4000 family (formerly FAStT) Event management © Copyright IBM Corp. 2005. All rights reserved. 375
11.1 Productivity Center common base: Introduction

Before using the Productivity Center common base features, you need to perform some configuration steps. This permits you to detect the storage devices to be managed. Version 2.3 of Productivity Center common base permits you to discover and manage:
– ESS 2105-F20, 2105-800, 2105-750
– DS6000 and DS8000 family
– SAN Volume Controller (SVC)
– DS4000 family (formerly the FAStT product range)

Provided that you have discovered a supported IBM storage device, the Productivity Center common base storage management functions are available for drag-and-drop operations. Alternatively, right-clicking the discovered device displays a drop-down menu with all the available functions specific to it. We review the available operations in the sections that follow.

Note: Not all functions of TotalStorage Productivity Center are applicable to all device types. For example, you cannot display the virtual disks on a DS4000, because the virtual disks concept is only applicable to the SAN Volume Controller. The sections that follow cover the functions available for each of the supported device types.

11.2 Launching TotalStorage Productivity Center

Productivity Center common base, along with TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication, is accessed via the TotalStorage Productivity Center Launchpad (Figure 11-1) icon on your desktop. Select Manage Disk Performance and Replication to start the IBM Director console interface.

Figure 11-1 TotalStorage Productivity Center launchpad
Alternatively, access IBM Director from Windows: Start → Programs → IBM Director → IBM Director Console.

Log on to IBM Director using the superuser ID and password defined at installation. Note that passwords are case sensitive. The login values are:
– IBM Director Server: Host name of the machine where IBM Director is installed
– User ID: The user name to log on with. This is the superuser ID. Enter it in the form <hostname>\<username>
– Password: The case-sensitive superuser ID password

Figure 11-2 shows the IBM Director Login panel that you see after launching IBM Director.

Figure 11-2 IBM Director Log on

11.3 Exploiting Productivity Center common base

The Productivity Center common base module adds the Multiple Device Manager submenu task in the right-hand Tasks pane of the IBM Director Console, as shown in Figure 11-3 on page 378.

Note: The Multiple Device Manager product has been rebranded to TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. You will still see the name Multiple Device Manager in some panels and messages.

Productivity Center common base installs the following subcomponents into the Multiple Device Manager menu:
Launch Device Manager
Launch Tivoli SAN Manager (now called TotalStorage Productivity Center for Fabric)
Manage CIMOMs
Manage Storage Units (menu)
– Inventory Status
– Managed Disks
– Virtual Disks
– Volumes
  • 398. Note: The Manage Performance and Manage Replication tasks that you see in Figure 11-3 become visible when TotalStorage Productivity Center for Disk or TotalStorage Productivity Center for Replication are installed. Although this chapter covers Productivity Center common base, you would have installed either TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both. Figure 11-3 IBM Director Console with Productivity Center common base 11.3.1 Launch Device Manager The Launch Device Manager task may be dragged onto an available storage device. For ESS, this will open the ESS Specialist window for a chosen device. For SAN Volume Controller, it will launch a browser session to that device. For DS4000 or FAStT devices, the function is not available. 11.4 Performing volume inventory This function is used to collect the detailed volume information from a discovered device and place it into the Productivity Center common base databases. You need to do this at least once before Productivity Center common base can start to work with a device. 378 IBM TotalStorage Productivity Center V2.3: Getting Started
When the Productivity Center common base functions are subsequently used to create or remove LUNs, the volume inventory is automatically kept up to date, so it is not necessary to repeatedly run inventory collection from the storage devices.

Version 2.3 of Productivity Center common base does not currently contain the full feature set of all functions for the supported storage devices, which makes it necessary to use the storage devices' own management tools for some tasks. For instance, you can create new VDisks on a SAN Volume Controller with Productivity Center common base, but you cannot delete them; you need to use the SAN Volume Controller's own management tools to do this. For these types of changes to be reflected in Productivity Center common base, an inventory collection is necessary to resynchronize the storage device and the Productivity Center common base inventory.

Attention: The use of volume inventory is common to ALL supported storage devices and must be performed before the disk management functions are available.

To start inventory collection, right-click the chosen device and select Perform Inventory Collection, as shown in Figure 11-4.

Figure 11-4 Launch Perform Inventory Collection

A new panel (Figure 11-5 on page 380) appears as a progress indication that the inventory process is running. At this stage, Productivity Center common base is talking to the relevant CIMOM to collect volume information from the storage device. After a short while, the information panel indicates that the collection has been successful. You can now close this window.
Figure 11-5 Inventory collection in progress

Attention: When the panel in Figure 11-5 indicates that the collection has completed successfully, it does not necessarily mean that the volume information has been fully processed by Productivity Center common base at this point.

To track the detailed processing status, launch the Inventory Status task as seen in Figure 11-6.

Figure 11-6 Launch Inventory Status
To see the processing status of an inventory collection, launch the Inventory Status task as shown in Figure 11-7.

Figure 11-7 Inventory Status

The example Inventory Status panel in Figure 11-7 shows the progress of the processing for a SAN Volume Controller. Use the Refresh button in the bottom left of the panel to update it with the latest progress. You can also launch the Inventory Status panel before starting an inventory collection to watch the process end to end. In our test lab, the inventory process for an SVC took around two minutes end to end.
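The refresh-until-complete flow described above amounts to a simple polling loop, sketched below. The status strings and their sequence are invented for illustration; the panel's actual states may differ.

```python
# Hypothetical sketch of watching inventory processing: keep refreshing
# until the status reports completion, as the Refresh button does manually.

states = iter(["Starting", "In progress", "In progress", "Complete"])

def refresh():
    # Stands in for clicking Refresh on the Inventory Status panel.
    return next(states)

history = []
status = refresh()
while status != "Complete":
    history.append(status)
    status = refresh()
```

After the loop, `history` holds every intermediate state that was observed before completion.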
11.5 Changing the display name of a storage device

You can change the display name of a discovered storage device to something more meaningful to your organization. Right-click the chosen storage device (Figure 11-8) and select the Rename option.

Figure 11-8 Changing the display name of a storage device

Enter a more meaningful device name as in Figure 11-9 and click OK.

Figure 11-9 Entering a user-defined storage device name
11.6 Working with ESS

This section covers the Productivity Center common base functions that are available when managing ESS devices. There are two ways to access Productivity Center functions for a given device, as shown in Figure 11-10:

- Tasks access: In the right-hand task panel there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are applicable to all supported devices.
- Right-click access: To access all functions available for a specific device, simply right-click it to see a drop-down menu of options for that device. Figure 11-10 shows the drop-down menu for an ESS.

Figure 11-10 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter covers only the Productivity Center common base functions, you would always have installed TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.

Figure 11-10 Accessing Productivity Center common base functions
11.6.1 ESS Volume inventory

To view the status of the volumes available within a given ESS device, perform one of the following actions:

- Right-click the ESS device and select Volumes as in Figure 11-11.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query.

Tip: Before volumes can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to view volumes for an ESS that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 11.4, “Performing volume inventory” on page 378.

Figure 11-11 Working with ESS volumes
In either case, the status in the bottom left corner changes from Ready to Starting Task, and remains this way until the volume inventory appears. Figure 11-12 shows the Volumes panel that appears for the selected ESS device.

Figure 11-12 ESS volume inventory panel

11.6.2 Assigning and unassigning ESS volumes

From the ESS volume inventory panel (Figure 11-12), you can modify existing volume assignments, either by assigning a volume to one or more new host ports or by unassigning a host from an existing volume-to-host-port mapping.

To assign a volume to a host port, select the volume, then click the Assign host button on the right side of the volume inventory panel (Figure 11-12). You are presented with a panel like the one shown in Figure 11-13 on page 386. From the list of available host port worldwide port names (WWPNs), select a single host port WWPN, or select more than one by holding down the <Ctrl> key. When the desired host ports have been selected for volume assignment, click OK.
Figure 11-13 Assigning ESS LUNs

When you click OK, TotalStorage Productivity Center for Fabric is called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you see a message panel as shown in Figure 11-14. When the volume has been successfully assigned to the selected host port, the Assign host ports panel disappears and the ESS Volumes panel is displayed once again, now reflecting the additional host port mapping in the Number of host ports column on the far right side of the panel.

Note: If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703, for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is invoked for zoning only when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.

Figure 11-14 Tivoli SAN Manager warning
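The assign and unassign operations above can be pictured as maintaining a set of host-port WWPNs per volume, with the "Number of host ports" column showing the size of that set. This is a conceptual sketch only; the functions, volume ID, and WWPNs are invented and do not reflect the product's internal data model.

```python
# Hypothetical model of the mapping behind the Assign host and
# Unassign host buttons.

assignments = {}  # volume ID -> set of host-port WWPNs

def assign_host(volume, wwpns):
    # Mirrors Ctrl-clicking one or more WWPNs and clicking OK.
    assignments.setdefault(volume, set()).update(wwpns)

def unassign_host(volume, wwpns):
    # Removes the selected WWPNs from the volume's mapping.
    assignments.get(volume, set()).difference_update(wwpns)

def host_port_count(volume):
    # Value shown in the "Number of host ports" column.
    return len(assignments.get(volume, set()))

assign_host("ESS-1001", ["10000000C9281234", "10000000C9285678"])
unassign_host("ESS-1001", ["10000000C9285678"])
```

After the two calls, the volume retains one host-port mapping, matching what the refreshed Volumes panel would show.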
11.6.3 Creating new ESS volumes

To create new ESS volumes, select the Create button from the Volumes panel as seen in Figure 11-12 on page 385. The Create volume panel appears (Figure 11-15).

Figure 11-15 ESS create volume

Use the drop-down fields to select the Storage type and choose from the Available arrays on the ESS. Then enter the number of volumes you want to create in the Volume quantity field, along with the Requested size. Finally, select the host ports that you want to have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the <Ctrl> key while clicking hosts.

On clicking OK, TotalStorage Productivity Center for Fabric (formerly known as Tivoli SAN Manager, or TSANM) is called to assist with zoning the new volumes to the host or hosts. If TotalStorage Productivity Center for Fabric is not installed, you see a message panel as seen in Figure 11-16. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703 for complete details of its operation.

Figure 11-16 Tivoli SAN Manager warning
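The Create volume inputs just described (array, quantity, requested size, host ports) expand into one request per volume. The sketch below is illustrative only; the function name, array label, and sizes are invented.

```python
# Hypothetical expansion of the Create volume panel's fields into
# individual volume requests, each carrying the selected host ports.

def build_create_request(array, quantity, size_gb, host_wwpns):
    return [{"array": array, "size_gb": size_gb, "hosts": list(host_wwpns)}
            for _ in range(quantity)]

request = build_create_request("Array-04", quantity=3, size_gb=10,
                               host_wwpns=["10000000C9280001"])
total_gb = sum(v["size_gb"] for v in request)   # 30 GB requested in all
```

Expanding the request up front makes it easy to check the total capacity being asked of the array before clicking OK.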
Figure 11-17 Remove a host path from a volume

Figure 11-18 Display ESS volume properties

11.6.4 Launch device manager for an ESS device

This option allows you to link directly to the ESS Specialist of the chosen device:

- Right-click the ESS storage resource, and select Launch Device Manager.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Launch Device Managers onto the storage device you want to query.
Figure 11-19 ESS Specialist launched by Productivity Center common base

11.7 Working with DS8000

This section covers the Productivity Center common base functions that are available when managing DS8000 devices. There are two ways to access Productivity Center functions for a given device, as shown in Figure 11-20 on page 390:

- Tasks access: In the right-hand task panel there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are applicable to all supported devices.
- Right-click access: To access all functions available for a specific device, simply right-click it to see a drop-down menu of options for that device. Figure 11-20 on page 390 shows the drop-down menu for a DS8000.

Figure 11-20 on page 390 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter covers only the Productivity Center common base functions, you would always have installed TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.
Figure 11-20 Accessing Productivity Center common base functions

11.7.1 DS8000 Volume inventory

To view the status of the volumes available within a given DS8000 device, perform one of the following actions:

- Right-click the DS8000 device and select Volumes as in Figure 11-21 on page 391.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query.

Tip: Before volumes can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to view volumes for a DS8000 that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 11.4, “Performing volume inventory” on page 378.
Figure 11-21 Working with DS8000 volumes

In either case, the status in the bottom left corner changes from Ready to Starting Task, and remains this way until the volume inventory appears. Figure 11-22 shows the Volumes panel that appears for the selected DS8000 device.

Figure 11-22 DS8000 volume inventory panel
11.7.2 Assigning and unassigning DS8000 volumes

From the DS8000 volume inventory panel (Figure 11-22 on page 391), you can modify existing volume assignments, either by assigning a volume to one or more new host ports or by unassigning a host from an existing volume-to-host-port mapping.

To assign a volume to a host port, click the Assign host button on the right side of the volume inventory panel. You are presented with a panel like the one in Figure 11-23. From the list of available host port worldwide port names (WWPNs), select a single host port WWPN, or select more than one by holding down the <Ctrl> key. When the desired host ports have been selected for volume assignment, click OK.

Figure 11-23 Assigning DS8000 LUNs

When you click OK, TotalStorage Productivity Center for Fabric is called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you see a message panel as seen in Figure 11-24 on page 393. When the volume has been successfully assigned to the selected host port, the Assign host ports panel disappears and the DS8000 Volumes panel is displayed once again, now reflecting the additional host port mapping in the Number of host ports column on the far right side of the panel.

Note: If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703 for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is invoked for zoning only when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.
Figure 11-24 Tivoli SAN Manager warning

11.7.3 Creating new DS8000 volumes

To create new DS8000 volumes, select the Create button from the Volumes panel as seen in Figure 11-22 on page 391. The Create volume panel appears (Figure 11-25).

Figure 11-25 DS8000 create volume

Use the drop-down fields to select the Storage type and choose from the Available arrays on the DS8000. Then enter the number of volumes you want to create in the Volume quantity field, along with the Requested size. Finally, select the host ports that you want to have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the <Ctrl> key while clicking hosts.

On clicking OK, TotalStorage Productivity Center for Fabric is called to assist with zoning the new volumes to the host or hosts. If TotalStorage Productivity Center for Fabric is not installed, you see a message panel as shown in Figure 11-26 on page 394.
Figure 11-26 Tivoli SAN Manager warning

11.7.4 Launch device manager for a DS8000 device

This option allows you to link directly to the DS8000 device manager of the chosen device:

- Right-click the DS8000 storage resource, and select Launch Device Manager.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Launch Device Managers onto the storage device you want to query.

We received a message that TotalStorage Productivity Center for Disk could not automatically log on (Figure 11-27). Click OK to get the DS8000 storage manager screen as shown in Figure 11-28 on page 395.

Figure 11-27 DS8000 storage manager launch warning
Figure 11-28 shows the DS8000 device manager launched by Productivity Center common base.

Figure 11-28 DS8000 device manager launched by Productivity Center common base
11.8 Working with SAN Volume Controller

This section covers the Productivity Center common base functions that are available when managing SAN Volume Controller subsystems. There are two ways to access Productivity Center functions for a given device, as shown in Figure 11-29 on page 397:

- Tasks access: In the right-hand task panel there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices.
- Right-click access: To access all functions available for a specific device, right-click it to see a drop-down menu of options for that device. Figure 11-29 on page 397 shows the drop-down menu for a SAN Volume Controller.

Note: Overall, the SAN Volume Controller functionality offered in Productivity Center common base in version 2.1 is fairly limited compared to that of the native SAN Volume Controller Web-based GUI. You can add existing unmanaged LUNs to existing MDisk groups, but there are no tools to remove MDisks from a group or to create or delete MDisk groups. The functions available for VDisks are similarly limited: Productivity Center common base can create new VDisks in a given MDisk group, but there is little other control over the placement of these volumes, and it is not possible to remove VDisks or reassign them to other hosts using Productivity Center common base.

11.8.1 Working with SAN Volume Controller MDisks

To view the properties of SAN Volume Controller managed disks (MDisks) as shown in Figure 11-30 on page 398, perform one of the following actions:

- Right-click the SVC storage resource, and select Managed Disks (Figure 11-29 on page 397).
- On the right-hand side under the Tasks column, drag Managed Storage Units → Managed Disks onto the storage device you want to query.
Tip: Before SAN Volume Controller managed disk (MDisk) properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to use the Managed Disk function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. Refer to 11.4, “Performing volume inventory” on page 378 for details on performing this operation.
Figure 11-29 shows the panel for selecting managed disks.

Figure 11-29 Select managed disk
Next, you should see the panel shown in Figure 11-30.

Figure 11-30 The MDisk properties panel for SAN Volume Controller

Figure 11-30 shows candidate (unmanaged) MDisks, which are available for inclusion in an existing MDisk group. To add one or more unmanaged disks to an existing MDisk group:

1. Select the MDisk group from the pull-down menu.
2. Select one MDisk from the list of candidate MDisks, or use the <Ctrl> key to select multiple disks.
3. Click the OK button at the bottom of the screen, and the selected MDisks are added to the MDisk group (Figure 11-31 on page 399).
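The add-to-group steps above can be sketched as moving disks from the candidate list into a named group. The sketch is purely conceptual: the group and MDisk names are invented, and (mirroring the product's limits described earlier) it offers no way to remove an MDisk or delete a group.

```python
# Hypothetical model of the MDisk panel: candidate (unmanaged) MDisks
# can only be added to an existing group.

mdisk_groups = {"mdiskgrp0": ["mdisk0", "mdisk1"]}
candidates = ["mdisk3", "mdisk4"]   # unmanaged MDisks shown in the panel

def add_to_group(group, selected):
    # Mirrors selecting one or more candidates (Ctrl-click) and OK.
    for mdisk in selected:
        candidates.remove(mdisk)          # no longer a candidate
        mdisk_groups[group].append(mdisk) # now managed by the group

add_to_group("mdiskgrp0", ["mdisk4"])
```

After the call, mdisk4 is part of mdiskgrp0 and is no longer listed as a candidate.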
Figure 11-31 Add MDisk to a managed disk group

11.8.2 Creating new MDisks on supported storage devices

Attention: The Create button, as seen in Figure 11-30 on page 398, is not for creating new MDisk groups. It is for creating new MDisks on storage devices serving the SAN Volume Controller. It is not possible to create new MDisk groups using Version 2.3 of Productivity Center common base.

1. Select the MDisk group from the pull-down menu (Figure 11-30 on page 398).
2. Select the Create button. A new panel opens to create the storage volume (Figure 11-32 on page 400).
3. Select a device accessible to the SVC (a device not marked by an asterisk). Devices marked with an asterisk are not acting as storage to the selected SAN Volume Controller. Figure 11-32 on page 400 shows an ESS with an asterisk next to it, because of the setup of our test environment. Make sure the device you select does not have an asterisk next to it.
4. Specify the number of MDisks in the Volume quantity field and their size in the Requested volume size field.
5. Select the Defined SVC ports that should be assigned to these new MDisks.
Note: If TotalStorage Productivity Center for Fabric is installed and configured, extra panels appear to create appropriate zoning for this operation. See Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703 for details.

Click OK to start a process that creates a new volume on the selected storage device and then adds it to the SAN Volume Controller's MDisk group.

Figure 11-32 Create volumes to be added as MDisks

Productivity Center common base now requests the specified storage amount from the specified back-end storage device (see Figure 11-33).

Figure 11-33 Volume creation results
The next step is to add the MDisks to an MDisk group (see Figure 11-34).

Figure 11-34 Assign MDisk to an MDisk group
Figure 11-35 shows the result of adding mdisk4 to the selected MDisk group.

Figure 11-35 Result of adding mdisk4 to the MDisk group

11.8.3 Create and view SAN Volume Controller VDisks

To create or view the properties of SAN Volume Controller virtual disks (VDisks) as shown in Figure 11-36 on page 403, perform one of the following actions:

- Right-click the SVC storage resource, and select Virtual Disks.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Virtual Disks onto the storage device you want to query.

In version 2.3 of Productivity Center common base, it is not possible to delete VDisks. It is also not possible to assign or reassign VDisks to a host after the creation process. Keep this in mind when working with storage on a SAN Volume Controller using Productivity Center common base. These tasks can still be performed using the native SAN Volume Controller Web-based GUI.
Tip: Before SAN Volume Controller virtual disk (VDisk) properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to use the Virtual Disk function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 11.4, “Performing volume inventory” on page 378.

Figure 11-36 Launch Virtual Disks
Viewing VDisks

Figure 11-37 shows the VDisk inventory and volume attributes for the selected SAN Volume Controller.

Figure 11-37 The VDisk properties panel
Creating a VDisk

To create a new VDisk, use the Create button as shown in Figure 11-37 on page 404. You need to provide a suitable VDisk name and select the MDisk group from which you want to create the VDisk. Specify the number of VDisks to be created and the size, in megabytes or gigabytes, of each VDisk. Figure 11-38 shows some example input in these fields.

Figure 11-38 SAN Volume Controller VDisk creation

The Host ports section of the VDisk properties panel allows you to use TotalStorage Productivity Center for Fabric functionality to perform zoning actions that provide VDisk access to specific host WWPNs. If TSANM is not installed, you will receive a warning. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703 for details on how to configure and use it.
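The VDisk creation inputs described above (name, MDisk group, quantity, and a size in megabytes or gigabytes) can be sketched as below. The naming scheme and the unit handling are assumptions for illustration, not the product's behavior.

```python
# Hypothetical expansion of the VDisk creation fields into individual
# VDisk definitions, normalizing the size to megabytes.

def build_vdisks(base_name, mdisk_group, quantity, size, unit="GB"):
    size_mb = size * 1024 if unit == "GB" else size  # assumed conversion
    return [{"name": f"{base_name}{i}", "group": mdisk_group,
             "size_mb": size_mb} for i in range(quantity)]

vdisks = build_vdisks("itso_vdisk", "mdiskgrp0", quantity=2, size=5, unit="GB")
```

The result is two 5120 MB VDisk definitions, itso_vdisk0 and itso_vdisk1, both drawn from mdiskgrp0.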
Figure 11-39 shows that the creation of the VDisk was successful.

Figure 11-39 Volume creation results

11.9 Working with DS4000 family or FAStT storage

This section covers the Productivity Center common base functions that are available when managing DS4000 and FAStT type subsystems. There are two ways to access Productivity Center functions for a given device, as shown in Figure 11-40 on page 407:

- Tasks access: In the right-hand task panel there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices.
- Right-click access: To access all functions available for the selected device, right-click it to see a drop-down menu of options for it.

Figure 11-40 on page 407 shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter covers only the Productivity Center common base functions, you would always have installed TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.
11.9.1 Working with DS4000 or FAStT volumes

To view the status of the volumes available within a selected DS4000 or FAStT device, perform one of the following actions:

- Right-click the DS4000 or FAStT storage resource, and select Volumes (Figure 11-40).
- On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query.

In either case, the status in the bottom left corner changes from Ready to Starting Task, and remains this way until the volume inventory is completed (see Figure 11-41 on page 408).

Note: Before DS4000 or FAStT volume properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. Refer to 11.4, “Performing volume inventory” on page 378 for details.

Figure 11-40 Working with DS4000 and FAStT volumes
Figure 11-41 DS4000 and FAStT volumes panel

Figure 11-41 shows the volume inventory for the selected device. From this panel you can create and delete volumes, or assign and unassign volumes to hosts.
11.9.2 Creating DS4000 or FAStT volumes

To create new storage volumes on a DS4000 or FAStT, select the Create button on the right side of the Volumes panel (Figure 11-41 on page 408). You are presented with the Create volume panel as in Figure 11-42.

Figure 11-42 DS4000 or FAStT create volumes

Select the desired Storage Type and array from the Available arrays using the drop-down menus. Then enter the Volume quantity and Requested volume size of the new volumes. Finally, select the host ports you wish to assign to the new volumes from the Defined host ports scroll box, holding the <Ctrl> key to select multiple ports.

The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions that provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 11-43. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703 for details on how to configure and use it.

Figure 11-43 Tivoli SAN Manager warning

If TotalStorage Productivity Center for Fabric is not installed, click OK to continue. You should then see the panels shown in Figure 11-44 through Figure 11-48 on page 412.
Figure 11-44 Volume creation results (1)

Figure 11-45 Volume creation results (2)
Figure 11-46 Volume creation results (3)

Figure 11-47 Volume creation results (4)
Figure 11-48 Volume creation results (5)
11.9.3 Assigning hosts to DS4000 and FAStT volumes

Use this feature to assign hosts to an existing DS4000 or FAStT volume. To assign a DS4000 or FAStT volume to a host port, first select a volume by clicking it in the Volumes panel (Figure 11-41 on page 408). Then click the Assign host button on the right side of the Volumes panel. You are presented with a panel as shown in Figure 11-49. From the list of available host port worldwide port names (WWPNs), select a single host port WWPN, or select more than one by holding down the <Ctrl> key. When the desired host ports have been selected for host assignment, click OK.

Figure 11-49 Assign host ports to DS4000 or FAStT

The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions that provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 11-50. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, “Using TotalStorage Productivity Center for Fabric” on page 703 for details on how to configure and use it.

Figure 11-50 Tivoli SAN Manager warning
If TotalStorage Productivity Center for Fabric is not installed, click OK to continue (Figure 11-51).

Figure 11-51 DS4000 volumes successfully assigned to a host

11.9.4 Unassigning hosts from DS4000 or FAStT volumes

To unassign a DS4000 or FAStT volume from a host port, first select a volume by clicking it in the Volumes panel (Figure 11-41 on page 408). Then click the Unassign host button on the right side of the Volumes panel. You are presented with a panel as in Figure 11-52. From the list of available host port worldwide port names (WWPNs), select a single host port WWPN, or select more than one by holding down the <Ctrl> key. When the desired host ports have been selected, click OK.

Note: If the Unassign host button is grayed out when you have selected a volume, there are no current host assignments for that volume. If you believe this is incorrect, the Productivity Center common base inventory may be out of step with the device's configuration. This situation can arise when an administrator makes changes to the device outside of the Productivity Center common base interface. To correct the problem, perform an inventory collection for the DS4000 or FAStT and repeat the operation. Refer to 11.4, “Performing volume inventory” on page 378.

Figure 11-52 Unassign host ports from DS4000 or FAStT
TotalStorage Productivity Center for Fabric is not called to perform zoning cleanup in version 2.1. This functionality is planned for a future release.

Figure 11-53 Volume unassignment results

11.9.5 Volume properties

Figure 11-54 shows the properties of a selected DS4000 or FAStT volume.

Figure 11-54 DS4000 or FAStT volume properties
11.10 Event Action Plan Builder

IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment.

Understanding Event Action Plans

An Event Action Plan associates one or more event filters with one or more actions. For example, an Event Action Plan can be created to send a page to the network administrator's pager if an event with a severity level of critical or fatal is received by the IBM Director Server. You can include as many event filter and action pairs as needed in a single Event Action Plan.

An Event Action Plan is activated only when you apply it to a managed system or group. If an event targets a system to which the plan is applied, and that event meets the filtering criteria defined in the plan, the associated actions are performed. Multiple event filters can be associated with the same action, and a single event filter can be associated with multiple actions.

The action templates you can use to define actions are listed in the Actions pane of the Event Action Plan Builder window (see Figure 11-55).

Figure 11-55 Action templates
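The filter-and-action pairing described above is, conceptually, a many-to-many structure: a plan holds filter/action pairs, one filter can trigger several actions, and the same action can appear under several filters. The sketch below models only this concept; the plan name, filter fields, and action names are invented and do not reflect IBM Director's implementation.

```python
# Hypothetical model of an Event Action Plan as a list of
# (event filter, actions) pairs.

plan = {"name": "Notify admin", "pairs": []}

def add_pair(plan, event_filter, actions):
    # One filter paired with one or more actions; a plan may hold
    # as many pairs as needed.
    plan["pairs"].append({"filter": event_filter, "actions": list(actions)})

# Page the administrator on critical or fatal events...
add_pair(plan, {"severity": {"critical", "fatal"}}, ["page_administrator"])
# ...and both log and e-mail on warnings.
add_pair(plan, {"severity": {"warning"}}, ["log_event", "send_email"])
```

Keeping each pair independent is what lets one filter drive several actions while the same action template is reused across filters.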
Creating an Event Action Plan

Event Action Plans are created in the Event Action Plan Builder window. To open this window from the Director Console, click the Event Action Plan Builder icon on the toolbar. The Event Action Plan Builder window is displayed (see Figure 11-56).

Figure 11-56 Event Action Plan Builder

Here are the tasks to create an Event Action Plan:

1. To begin, do one of the following actions:
– Right-click Event Action Plans in the Event Action Plans pane to access the context menu, and then select New.
– Select File → New → Event Action Plan from the menu bar.
– Double-click the Event Action Plan folder in the Event Action Plans pane (see Figure 11-57).

Figure 11-57 Create Event Action Plan
2. Enter the name you want to assign to the plan and click OK to save the new plan. The new plan entry with the name you assigned is displayed in the Event Action Plans pane. The plan is also added to the Event Action Plans task as a child entry in the Director Console (see Figure 11-58). Now that you have defined an Event Action Plan, you can assign one or more filters and actions to the plan.

Figure 11-58 New Event Action Plan

Notes: You can create a plan without having defined any filters or actions. The order in which you build a filter, an action, and an Event Action Plan does not matter.

3. Assign at least one filter to the Event Action Plan using one of the following methods:
– Drag the event filter from the Event Filters pane to the Event Action Plan in the Event Action Plans pane.
– Highlight the Event Action Plan, then right-click the event filter to display the context menu and select Add to Event Action Plan.
– Highlight the event filter, then right-click the Event Action Plan to display the context menu and select Add Event Filter (see Figure 11-59 on page 419).
  • 439. Figure 11-59 Add events to the action plan The filter is now displayed as a child entry under the plan (see Figure 11-60). Figure 11-60 Events added to action plan Chapter 11. Using TotalStorage Productivity Center for Disk 419
  • 440. 4. Assign at least one action to at least one filter in the Event Action Plan using one of the following methods: – Drag the action from the Actions pane to the target event filter under the desired Event Action Plan in the Event Action Plans pane. – Highlight the target filter, then right-click the desired action to display the context menu and select Add to Event Action Plan. – Highlight the desired action, then right-click the target filter to display the context menu and select Add Action. The action is now displayed as a child entry under the filter (see Figure 11-61). Figure 11-61 Action as child of Display Events Action Plan 5. Repeat the previous two steps for as many filter and action pairings as you want to add to the plan. You can assign multiple actions to a single filter and multiple filters to a single plan. Note: The plan you have just created is not active because it has not been applied to a managed system or a group. In the next section we explain how to apply an Event Action Plan to a managed system or group. 420 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 441. 11.10.1 Applying an Event Action Plan to a managed system or group An Event Action Plan is activated only when it is applied to a managed system or group. To activate a plan: Drag the plan from the Tasks pane of the Director Console to a managed system in the Group Contents pane or to a group in the Groups pane. Drag the system or group to the plan. Select the plan, right-click the system or group, and select Add Event Action Plan (see Figure 11-62). Figure 11-62 Notification of Event Action Plan added to group/system(s) Repeat this step for all associations you want to make. You can activate the same Event Action Plan for multiple systems (see Figure 11-63). Figure 11-63 Director with Event Action Plan - Display Events Once applied, the plan is activated and displayed as a child entry of the managed system or group to which it is applied when the Associations - Event Action Plans item is checked. Chapter 11. Using TotalStorage Productivity Center for Disk 421
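The relationships just described (event filters mapped to actions, with a plan becoming active only once applied to a managed system or group) can be sketched as a small model. This is our own illustrative sketch in Python; the class and method names are hypothetical and are not part of the IBM Director product:

```python
class EventActionPlan:
    """Sketch of the Event Action Plan model described above: a plan maps
    event filters to the actions they trigger, and fires only for targets
    (managed systems or groups) the plan has been applied to."""

    def __init__(self, name):
        self.name = name
        self.filter_actions = {}   # event filter -> list of actions
        self.applied_to = set()    # managed systems and groups

    def add_action(self, event_filter, action):
        # A filter may have several actions, and an action may be
        # reused under several filters.
        self.filter_actions.setdefault(event_filter, []).append(action)

    def apply_to(self, target):
        # Activates the plan for a managed system or group.
        self.applied_to.add(target)

    def actions_for(self, target, event_filter):
        # No actions are performed unless the plan is applied to the
        # target and the event matches one of the plan's filters.
        if target not in self.applied_to:
            return []
        return self.filter_actions.get(event_filter, [])
```

With this model, an event triggers actions only when its filter matches and the plan has been applied to the target system, mirroring the activation behavior described above.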
• 442. Message Browser When an event occurs, the Message Browser (see Figure 11-64) pops up on the server console. Figure 11-64 Message Browser If the message has not yet been viewed, the Status for that message will be blank. When viewed, a checked envelope icon will appear under the Status column next to the message. To see greater detail on a particular message, select the message in the left pane and click the Event Details button (see Figure 11-65). Figure 11-65 Event Details window 422 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 443. 11.10.2 Exporting and importing Event Action Plans With the Event Action Plan Builder, you can import and export action plans to files. This enables you to move action plans quickly from one IBM Director Server to another or to import action plans that others have provided. Export Event Action Plans can be exported to three types of files: Archive: Backs up the selected action plan to a file that can be imported into any IBM Director Server. HTML: Creates a detailed listing of the selected action plans, including its filters and actions, in an HTML file format. XML: Creates a detailed listing of the selected action plans, including its filters and actions, in an XML file format. To export an Event Action Plan, do the following steps: 1. Open the Event Action Plan Builder. 2. Select an Event Action Plan from those available under the Event Action Plan folder. 3. Select File → Export, then click the type of file you want to export to (see Figure 11-66). If this Event Action Plan will be imported by an IBM Director Server, then select Archive. Figure 11-66 Archiving an Event Action Plan Chapter 11. Using TotalStorage Productivity Center for Disk 423
• 444. 4. Name the archive and choose a save location in the Select Archive File for Export window, as shown in Figure 11-67. Figure 11-67 Select destination and file name Tip: When you export an action plan, regardless of the type, the file is created on a local drive on the IBM Director Server. If an IBM Director Console is used to access the IBM Director Server, then the file can be saved to either the Server or the Console by selecting Server or Local from the Destinations pull-down. It cannot be saved to a network drive. Use the File Transfer task if you want to copy the file elsewhere. 424 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 445. Import Event Action Plans can be imported from a file, which must be an Archive export of an action plan from another IBM Director Server. Follow these steps to import an Event Action Plan: 1. Transfer the archive file to be imported to a drive on the IBM Director Server. 2. Open the Event Action Plan Builder from the main Console window. 3. Click File → Import → Archive (see Figure 11-68). Figure 11-68 Importing an Event Action Plan 4. From the Select File for Import window (see Figure 11-69), select the archive file and location. The file must be located on the IBM Director Server. If using the Console, you must transfer the file to the IBM Director Server before it can be imported. Figure 11-69 Select file for import Chapter 11. Using TotalStorage Productivity Center for Disk 425
  • 446. 5. Click OK to begin the import process. The Import Action Plan window opens, displaying the action plan to import (see Figure 11-70). If the action plan had been assigned previously to systems or groups, you will be given the option to preserve associations during the import. Select Import to complete the import process. Figure 11-70 Verifying import of Event Action Plan 426 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 447. 12 Chapter 12. Using TotalStorage Productivity Center Performance Manager This chapter provides a step-by-step guide to help you configure and use the Performance Manager functions provided by the TotalStorage Productivity Center for Disk. © Copyright IBM Corp. 2005. All rights reserved. 427
• 448. 12.1 Exploiting Performance Manager You can use the Performance Manager component of TotalStorage Productivity Center for Disk to manage and monitor the performance of the storage devices that TotalStorage Productivity Center for Disk supports. Performance Manager provides the following functions: Collecting data from devices Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS) and IBM TotalStorage SAN Volume Controller in the first release. Configuring performance thresholds You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria allows Performance Manager to notify you when a certain threshold has been crossed, thus enabling you to take action before a critical event occurs. Viewing performance data You can view performance data from the Performance Manager database using the gauge application programming interfaces (APIs). These gauges present performance data in graphical and tabular forms. Using Volume Performance Advisor (VPA) The Volume Performance Advisor is an automated tool that helps you select the best possible placement of a new LUN from a performance perspective. This function is integrated with Device Manager so that, when the VPA has recommended locations for requested LUNs, the LUNs can be allocated and assigned to the appropriate host without going back to Device Manager. Managing workload profiles You can use Performance Manager to select a predefined workload profile or to create a new workload profile that is based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server.
The installation of the Performance Manager component onto an existing TotalStorage Productivity Center for Disk server provides a new ‘Manage Performance’ task tree (Figure 12-1) on the right-hand side of the TotalStorage Productivity Center for Disk host. This task tree includes the various elements shown. Figure 12-1 New Performance Manager tasks 428 IBM TotalStorage Productivity Center V2.3: Getting Started
• 449. 12.1.1 Performance Manager GUI The Performance Manager Graphical User Interface can be launched from the IBM Director console interface. After logging on to IBM Director, you will see a screen as in Figure 12-2. On the rightmost Tasks pane, you will see the Manage Performance launch menu. It is highlighted and expanded in the figure shown. Figure 12-2 IBM Director Console with Performance Manager 12.1.2 Performance Manager data collection To collect performance data for the Enterprise Storage Server (ESS), Performance Manager invokes the ESS Specialist server, setting a particular performance data collection frequency and duration of collection. The Specialist establishes a connection, collects the performance statistics from the ESS, and sends the collected performance data to Performance Manager. Performance Manager then processes the performance data and saves it in Performance Manager database tables. From this section you can create data collection tasks for the supported, discovered IBM storage devices. There are two ways to use the Data Collection task to begin gathering device performance data. 1. Drag and drop the data collection task option from the right-hand side of the Multiple Device Manager application onto the Storage Device for which you want to create the new task. Chapter 12. Using TotalStorage Productivity Center Performance Manager 429
  • 450. 2. Or, right-click a storage device in the center column, and select the Performance Data Collection Panel menu option as shown in Figure 12-3. Figure 12-3 ESS tasks panel Either operation results in a new window named Create Performance Data Collection Task (Figure 12-4). In this window you will specify: A task name A brief description of the task The sample frequency in minutes The duration of data collection task (in hours) Figure 12-4 Create Performance Data Collection Task for ESS 430 IBM TotalStorage Productivity Center V2.3: Getting Started
• 451. In our example, we are setting up a data collection task on an ESS with Device ID 2105.16603. We created a task named Cottle_ESS with a sample frequency of 5 minutes and a duration of 1 hour. It is possible to add more ESSs to the same data collection task by clicking the Add button on the right-hand side. You can click individual devices, or select multiple devices by holding the Ctrl key. See Figure 12-5 for an example of this panel. In our example, we created a task for the ESS with device ID 2105.22513. Figure 12-5 Adding multiple devices to a single task Chapter 12. Using TotalStorage Productivity Center Performance Manager 431
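As a quick check on these parameters, the number of samples such a task gathers follows directly from the frequency and duration. The helper below is our own illustration of that arithmetic, not a documented product formula:

```python
def expected_samples(frequency_minutes, duration_hours):
    """Number of performance samples collected, assuming one sample
    per interval over the task's duration."""
    return (duration_hours * 60) // frequency_minutes

# The Cottle_ESS task: a 5-minute sample frequency over a 1-hour duration.
print(expected_samples(5, 1))   # → 12
```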
  • 452. Once we have established the scope of our data collection task and have clicked the OK button, we see our new data collection task available in the right-hand task column (see Figure 12-6). We have created task Cottle_ESS in the example. Tip: When providing a description for a new data collection task, you may elect to provide information about the duration and frequency of the task. Figure 12-6 A new data collection task 432 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 453. In order to schedule it, right-click the selected task (see Figure 12-7). Figure 12-7 Scheduling new data collection task You will see another window as shown in Figure 12-8. Figure 12-8 Scheduling task You have the option to use the job scheduling facility of TotalStorage Productivity Center for Disk, or to execute the task immediately. Chapter 12. Using TotalStorage Productivity Center Performance Manager 433
  • 454. If you select Execute Now, you will see a panel similar to the one in Figure 12-9, providing you with some information about task name and task status, including the time the task was initialized. Figure 12-9 Task progress panel If you would rather schedule the task to occur at a future time, or to specify additional parameters for the job schedule, you can walk through the panel in Figure 12-10. You may provide a scheduled job description for the scheduled job. In our example, we created a job, 24March Cottle ESS. Figure 12-10 New scheduled job panel 434 IBM TotalStorage Productivity Center V2.3: Getting Started
• 455. 12.1.3 Using IBM Director Scheduler function You may specify additional scheduled job parameters by using the Advanced button. You will see the panel in Figure 12-11. You can also launch this panel from IBM Director Console → Tasks → Scheduler → File → New Job. You can also set up the repeat frequency of the task. Figure 12-11 New scheduled job, advanced tab Once you are finished customizing the job options, you may save it using the File → Save As menu, or by clicking the diskette icon in the top left corner of the advanced panel. Chapter 12. Using TotalStorage Productivity Center Performance Manager 435
• 456. When you save with advanced job options, you may provide a descriptive name for the job as shown in Figure 12-12. Figure 12-12 Save job panel with advanced options You should receive a confirmation that your job has been saved as shown in Figure 12-13. Figure 12-13 Scheduled job is saved 436 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 457. 12.1.4 Reviewing data collection task status You can review the task status using Task Status under the rightmost column Tasks. See Figure 12-14. Figure 12-14 Task Status Chapter 12. Using TotalStorage Productivity Center Performance Manager 437
• 458. Double-clicking Task Status launches the panel shown in Figure 12-15. Figure 12-15 Task Status Panel To review the task status, click a task shown under the Task name column. For example, we selected the task FCA18P, which was aborted, as shown in Figure 12-16 on page 439. The details, with Device ID, Device status, and Error Message ID, are then shown in the Device status box. Click an entry in the Device status box to show the error message in the Error message box. 438 IBM TotalStorage Productivity Center V2.3: Getting Started
• 459. Figure 12-16 Task status details 12.1.5 Managing Performance Manager Database The collected performance data is stored in a back-end DB2 database. This database needs to be maintained so that only relevant data is kept. You may decide on a frequency for purging old data based on your organization's requirements. Chapter 12. Using TotalStorage Productivity Center Performance Manager 439
• 460. The performance database panel can be launched by clicking Performance Database as shown in Figure 12-17. It displays the Performance Database Properties panel shown in Figure 12-18 on page 441. Figure 12-17 Launch Performance Manager database 440 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 461. You can use the performance database panel to specify properties for a performance database purge task. The sizing function on this panel shows used space and free space in the database. You can choose to purge performance data based on age of the data, the type of the data, and the storage devices associated with the data (Figure 12-18). Figure 12-18 Properties of Performance database The Performance database properties panel shows the following data: Database name The name of the database Database location The file system on which the database resides. Total file system capacity The total capacity available to the file system, in gigabytes. Space currently used on file system Space is shown in gigabytes and also by percentage. Performance manager database full The amount of space used by Performance Manager. The percentage shown is the percentage of available space (total space - currently used space) used by the Performance Manager database. Chapter 12. Using TotalStorage Productivity Center Performance Manager 441
• 462. The following terms are used to derive the percentage of disk space full in the Performance Manager database: a = the total capacity of the file system, b = the total allocated space for the Performance Manager database on the file system, and c = the portion of the allocated space that is used by the Performance Manager database. Any fractional percentage is rounded up to the next whole integer. For example, 5.1% is rounded to and displayed as 6%. Space status advisor The Space status advisor monitors the amount of space used by the Performance Manager database and advises you as to whether you should purge data. The advisor levels are: Low: You do not need to purge data now High: You should purge data soon Critical: You need to purge data now The disk space thresholds for these status categories are: Low if utilization < 0.8, High if 0.8 <= utilization < 0.9, and Critical otherwise; that is, the delimiters between Low/High/Critical are 80% and 90% full. Purge database options Groups the database purge information. Name Type a name for the performance database purge task. A name can be from 1 to 250 characters long. Description (optional) Type a description for the performance database purge task. A description can be from 1 to 250 characters long. Device type Select one or more storage device types for the performance database purge. Options are SVC, ESS, or All. (Default is All.) Purge performance data older than Select the maximum age for data to be retained when the purge task is run. You can specify this value in days (1-365) or years (1-10). For example, if you select the Days button and a value of 10, the database purge task will purge all data older than 10 days when it is run. Therefore, if it has been more than 10 days since the task was run, all performance data would be purged. Defaults are 365 days or 10 years.
Purge data containing threshold exception information Deselecting this option will preserve performance data that contains information about threshold exceptions. This information is required to display exception gauges. This option is selected by default. Save as task button When you click Save as task, the information you specified is saved and the panel closes. The newly created task is saved to the IBM Director Task pane under the Performance Manager Database. Once it is saved, the task can be scheduled using the IBM Director scheduler function. 442 IBM TotalStorage Productivity Center V2.3: Getting Started
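The rounding rule and Space status advisor thresholds described above can be sketched as follows. This is an illustrative sketch only: the used/allocated ratio is our assumption, since the product's exact formula over its a, b, and c terms is not reproduced here.

```python
import math

def percent_full(allocated_gb, used_gb):
    """Percentage of the allocated Performance Manager space in use,
    with any fractional amount rounded up (5.1% displays as 6%).
    The used/allocated ratio is an assumption, not the product's
    documented formula."""
    return math.ceil(used_gb / allocated_gb * 100)

def space_status(utilization):
    """Space status advisor levels, with delimiters at 80% and 90% full."""
    if utilization < 0.8:
        return "Low"
    if utilization < 0.9:
        return "High"
    return "Critical"

print(percent_full(100, 5.1))   # → 6
print(space_status(0.85))       # → High
```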
• 463. 12.1.6 Performance Manager gauges Once data collection is complete, you may use the Gauges task to retrieve information about a variety of storage device metrics. Gauges are used to drill down to the level of detail necessary to isolate performance issues on the storage device. To view information collected by the Performance Manager, a gauge must be created or a custom script written to access the DB2 tables/fields directly. Creating a gauge Open the IBM Director and do one of the following tasks: Right-click the storage device in the center pane and select Gauges (see Figure 12-19). Figure 12-19 Right-click gauge opening Click Gauges on the panel shown to produce the Job Status window shown in Figure 12-21 on page 444. It is also possible to launch gauge creation by expanding Multiple Device Manager - Manage Performance in the rightmost column. Drag the Gauges item to the desired storage device and drop it to open the gauges for that device (see Figure 12-20 on page 444). Chapter 12. Using TotalStorage Productivity Center Performance Manager 443
• 464. Figure 12-20 Drag-n-drop gauge opening This will produce the Job status window (see Figure 12-21) while the Performance gauges window opens. You will see the Job status window while other selected windows are opening. Figure 12-21 Opening Performance gauges job status The Performance gauges window will be empty until a gauge is created for use. We have created three gauges (see Figure 12-22). Figure 12-22 Performance gauges Clicking the Create button to the left brings up the Job status window while the Create performance gauge window opens. 444 IBM TotalStorage Productivity Center V2.3: Getting Started
• 465. The Create performance gauge window changes values depending on whether the cluster, array, or volume items are selected in the left pane. Clicking the cluster item in the left pane produces a window as seen in Figure 12-23. Figure 12-23 Create performance gauge - Performance Under the Type pull-down menu, select Performance or Exception. Performance Cluster Performance gauges provide details on the average cache holding time in seconds as well as the percent of I/O requests that were delayed due to NVS memory shortages. Two Cluster Performance gauges are required per ESS to view the available historical data for each cluster. Additional gauges can be created to view live performance data. Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, thus allowing an overall or detailed view of the data. Name: Enter a name that is both descriptive of the type of gauge as well as the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager Server. If "test" were used as a gauge name, then it could not be used for another gauge, even if another storage device were selected, because it would not be unique in the database. Example names: 28019P_C1H would represent the ESS serial number (28019), the performance gauge type (P), the cluster (C1), and historical (H), while 28019E would represent the exception (E) gauge for the same ESS. Gauges for the clusters and arrays would build on that nomenclature to group the gauges by ESS on the Gauges window. Chapter 12. Using TotalStorage Productivity Center Performance Manager 445
• 466. Description: Use this space to enter a detailed description of the gauge that will appear on the gauge and in the Gauges window. Metric(s): Click the metric(s) that will be displayed by default when the gauge is opened for viewing. Those metrics with the same value under the Units column in the Metrics table can be selected together using either Shift mouse-click or Ctrl mouse-click. The metrics in this field can be changed on the historical gauge after the gauge has been opened for viewing. In other words, a historical gauge for each metric or group of metrics is not necessary. However, these metrics cannot be changed for live gauges. A new gauge is required for each metric or group of metrics desired. Component: Select a single device from the Component table. This field cannot be changed when the gauge is opened for viewing. Data points: Selecting this radio button enables the gauge to display the most recent data obtained from currently running performance collectors against the storage device. One most recent performance data gauge is required per cluster and per metric to view live collection data. The Device pull-down menu displays text informing the user whether or not a performance collection task is running against this Device. You can select the number of data points to display the last "x" data points from the date of the last collection. The data collection could be one that is currently running or the most recent completed one. Date Range: Selecting this radio button presents data over a range of dates/times. Enter the range of dates this gauge will use as a default for the gauge. The date and time values may be adjusted within the gauge to any value before or after the default values and the gauge will display any relevant data for the updated time period. Display gauge: Checking this box will display the newly created gauge after clicking the OK button. Otherwise, if left blank, the gauge will be saved without displaying.
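The gauge-name rules given above (no white space or special characters, at most 100 characters, unique on the server) can be checked with a small helper. This is our own sketch; we assume the underscore counts as allowed, since the example name 28019P_C1H contains one:

```python
import re

def valid_gauge_name(name, existing_names=()):
    """Validate a gauge name against the stated rules: non-empty, at most
    100 characters, letters/digits/underscore only (underscore assumed
    allowed, as in 28019P_C1H), and unique among existing gauge names."""
    if not 1 <= len(name) <= 100:
        return False
    if not re.fullmatch(r"[A-Za-z0-9_]+", name):
        return False
    return name not in existing_names

print(valid_gauge_name("28019P_C1H"))        # → True
print(valid_gauge_name("test", {"test"}))    # → False (not unique)
```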
446 IBM TotalStorage Productivity Center V2.3: Getting Started
• 467. Click the OK button when ready to save the performance gauge (see Figure 12-24). In this example, we have created a gauge with the name 22513C1H and the description "average cache holding time". We selected 11-March-2005 as both the starting and ending date. This corresponds with our data collection task schedule. Figure 12-24 Ready to save performance gauge The gauge appears after clicking the OK button with the Display gauge box checked, or when the Display button is clicked after selecting the appropriate gauge on the Performance gauges window (see Figure 12-26 on page 448). If you decide to save the gauge without displaying it, you will see a panel as shown in Figure 12-25. Figure 12-25 Saved performance gauges Chapter 12. Using TotalStorage Productivity Center for Disk 447
• 468. Figure 12-26 Cluster performance gauge - upper The top of the gauge contains the following labels: Graph Name The name of the gauge Description The description of the gauge Device The storage device selected for the gauge Component level Cluster, array, volume Component ID The ID # of the component (cluster, array, volume) Threshold The thresholds that were applied to the metrics Time of last data collection Date and time of the last data collection The Display Properties section in the center of the gauge contains the only fields that may be altered. The Metrics may be selected either individually or in groups as long as the data types are the same (for example, seconds with seconds, milliseconds with milliseconds, or percent with percent). Click the Apply button to force a Performance Gauge section update with the new y-axis data. The Start Date:, End Date:, Start Time:, and End Time: fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force a Performance Gauge section update with the new x-axis data. For example, we applied the Total I/O Rate metric to the saved gauge, and the resultant graph is shown in Figure 12-27 on page 449. Here, the Performance Gauge section of the gauge displays graphically the information over the time period selected by the gauge and the options in the Display Properties section. 448 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 469. Figure 12-27 Cluster performance gauge with applied I/O rate metric Click the Refresh button in the Performance Gauge section to update the graph with the original metrics and date/time criteria. The date and time of the last refresh appear to the right of the Refresh button. The date and time displayed are updated first, followed by the contents of the graph, which can take up to several minutes to update. Finally, the data used to generate the graph is displayed at the bottom of the window (see Figure 12-28 on page 450). Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 12-32 on page 453). The sort reads the data from left to right, so the results may not be as expected. The gauges for the array and volume components function in the same manner as the cluster gauge created above. Chapter 12. Using TotalStorage Productivity Center Performance Manager 449
• 470. Figure 12-28 Create Performance Gauge - Lower Exception Exception gauges display data only for those active thresholds that were crossed during the reporting period. One Exception gauge displays threshold exceptions for the entire storage device based on the thresholds active at the time of collection. 450 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 471. To create an exception gauge, select Exception from the Type pull-down menu (see Figure 12-29). Figure 12-29 Create performance gauge - Exception By default, the Cluster will be highlighted in the left pane and the metrics and component sections will not be available. Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window thus allowing an overall or detailed view of the data. Name: Enter a name that is both descriptive of the type of gauge as well as the detail provided by the gauge. The name must not contain white space, special characters, or exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager Server. Description: Use this space to enter a detailed description of the gauge that will appear on the gauge and in the Gauges window Date Range: Selecting this radio button presents data over a range of dates/times. Enter the range of dates this gauge will use as a default for the gauge. The date and time values may be adjusted within the gauge to any value before or after the default values and the gauge will display any relevant data for the updated time period. Display gauge: Checking this box will display the newly created gauge after clicking the OK button. Otherwise, if left blank, the gauge will be saved without displaying. Chapter 12. Using TotalStorage Productivity Center Performance Manager 451
• 472. Click the OK button when ready to save the performance gauge. We created an exception gauge as shown in Figure 12-30. Figure 12-30 Ready to save exception gauge The top of the gauge contains the following labels: Graph Name The name of the gauge Description The description of the gauge Device The storage device selected for the gauge Threshold The thresholds that were applied to the metrics Time of last data collection Date and time of the last data collection The Display Properties section in the center of the gauge contains the only fields that may be altered. The Start Date: and End Date: fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force an Exceptions Gauge section update with the new x-axis data. The Exceptions Gauge section of the gauge displays graphically the information over the time period selected by the gauge and the options in the Display Properties section (see Figure 12-31 on page 453). 452 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 473. Figure 12-31 Exceptions gauge - upper Click the Refresh button in the Exceptions Gauge section to update the graph with the original date criteria. The date and time of the last refresh appear to the right of the Refresh button. The date and time displayed are updated first, followed by the contents of the graph, which can take up to several minutes to update. Finally, the data used to generate the graph are displayed at the bottom of the window. Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 12-32). Figure 12-32 Data sort options Chapter 12. Using TotalStorage Productivity Center Performance Manager 453
• 474. Display Gauges To display previously created gauges, either right-click the storage device and select Gauges (see Figure 12-19 on page 443) or drag and drop the Gauges item on the storage device (see Figure 12-20 on page 444) to open the Performance gauges window shown in Figure 12-33. Figure 12-33 Performance gauges window Select one of the gauges and then click Display. Gauge Properties The Properties button allows the following fields or choices to be modified. Performance These are the performance-related possibilities: Description Metrics Component Data points Date range — date and time ranges 454 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 475. You can change the data displayed in the gauge from Data points with an active data collection to Date range (see Figure 12-34). Selecting Date range allows you to choose the Start date and End Date using the performance data stored in the DB2 database. Figure 12-34 Performance gauge properties Chapter 12. Using TotalStorage Productivity Center Performance Manager 455
  • 476. Exception You can change the Type property of the gauge definition from Performance to Exception. For a gauge type of Exception, you can only choose to view data for a Date range (see Figure 12-35). Figure 12-35 Exception gauge properties Delete a gauge To delete a previously created gauge, either right-click the storage device and select gauges (see Figure 12-19 on page 443) or drag and drop the Gauges item on the storage device (see Figure 12-20 on page 444) to open the Performance gauges window shown in Figure 12-33 on page 454. Select the gauge to remove and click Delete. A pop-up window will prompt for confirmation to remove the gauge (see Figure 12-36). Figure 12-36 Confirm gauge removal 456 IBM TotalStorage Productivity Center V2.3: Getting Started
• 477. To confirm, click Yes and the gauge will be removed. The gauge name may now be reused, if desired. 12.1.7 ESS thresholds Thresholds are used to determine watermarks for warning and error indicators for an assortment of storage metrics, including: Disk Utilization Cache Holding Time NVS Cache Full Total I/O Requests Thresholds are accessed in either of two ways: 1. Right-clicking a storage device in the center panel of TotalStorage Productivity Center for Disk, and selecting the thresholds menu option (Figure 12-37) 2. Dragging and dropping the thresholds task from the right tasks panel in Multiple Device Manager onto the desired storage device, to display or modify the thresholds for that device Figure 12-37 Opening the thresholds panel Chapter 12. Using TotalStorage Productivity Center Performance Manager 457
  • 478. Upon opening the thresholds submenu, you will see the following display, which shows the default thresholds in place for ESS, as shown in Figure 12-38. Figure 12-38 Performance Thresholds main panel On the right-hand side, there are buttons for Enable, Disable, Copy Threshold Properties, Filters, and Properties. If the selected task is already enabled, then the Enable button will appear greyed out, as in our case. If we attempt to disable a threshold that is currently enabled, by clicking the Disable button, a message will be displayed as shown in Figure 12-39. Figure 12-39 Disabling threshold warning panel You may elect to continue, and disable the selected threshold, or to cancel the operation by clicking Don’t disable threshold. 458 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 479. The copy threshold properties button will allow you to copy existing thresholds to other devices of similar type (ESS, in our case). The window in Figure 12-40 is displayed. Figure 12-40 Copying thresholds panel Note: As shown in Figure 12-40, the copying threshold panel is aware that we have registered on our ESS CIM agent host both clusters of our model 800 ESS, as indicated by the semicolon delimited IP address field for the device ID “2105.22219”. The Filters window is another available thresholds option. From this panel, you can enable, disable, and modify existing filter values against selected thresholds as shown in Figure 12-41. Figure 12-41 Threshold filters panel Chapter 12. Using TotalStorage Productivity Center Performance Manager 459
• 480. Finally, you can open the properties panel for a selected threshold, and are shown the panel in Figure 12-42. You have options to acknowledge the values at their current settings, modify the warning or error levels, or select the alert level (none, warning only, and warning or error are the available options). Figure 12-42 Threshold properties panel 12.1.8 Data collection for SAN Volume Controller Performance Manager uses the integrated configuration assistant tool (ICAT) interface of a SAN Volume Controller (SVC) to start and stop performance statistics collection on a SAN Volume Controller device. The process for performing data collection on SAN Volume Controller is similar to that of ESS. You will need to set up a new Performance Data Collection Task for the SAN Volume Controller device. Figure 12-43 on page 461 is an example of the panel you should see when you drag the Data Collection task onto the SAN Volume Controller device, or right-click the device and left-click Data Collection. As with the ESS data collection task: Define a task name and description Select sample frequency and duration of the task and click OK Note: The SAN Volume Controller can perform data collection at a minimum interval of 15 minutes. You may use the Add button to include additional SAN Volume Controller devices in the same data collection task, or use the Remove button to exclude SAN Volume Controllers from an existing task. In our case we are performing data collection against a single SAN Volume Controller. 460 IBM TotalStorage Productivity Center V2.3: Getting Started
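The task-definition rules above (a task name, a sample frequency, a duration, and the 15-minute SVC minimum interval) can be sketched as a small validation routine. This is an illustrative sketch only; the function name and error messages are hypothetical, not part of the product.

```python
from datetime import timedelta

# The 15-minute minimum sample interval for SVC comes from the Note above;
# everything else here is an illustrative assumption.
SVC_MIN_INTERVAL = timedelta(minutes=15)

def validate_svc_collection_task(name, interval, duration):
    """Return a list of validation errors for a hypothetical SVC data collection task."""
    errors = []
    if not name:
        errors.append("A task name is required.")
    if interval < SVC_MIN_INTERVAL:
        errors.append("SAN Volume Controller supports a minimum sample interval of 15 minutes.")
    if duration < interval:
        errors.append("Duration must cover at least one sample interval.")
    return errors
```

A task with a 15-minute frequency and a multi-hour duration would pass this check; a 5-minute frequency would not.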
• 481. Figure 12-43 The SVC Performance Data Collection Task As long as at least one data collection task has been completed, you are able to proceed with the steps to create a gauge to view your performance data. 12.1.9 SAN Volume Controller thresholds To view the available Performance Manager Thresholds, you can right-click the SAN Volume Controller device and click Thresholds, or drag the Threshold task from the right-hand panel onto the SAN Volume Controller device you want to query. A panel like the one in Figure 12-44 appears. Figure 12-44 The SVC performance thresholds panel SVC has the following thresholds with their default properties: VDisk I/O rate Total number of virtual disk I/Os for each I/O group. SAN Volume Controller defaults: – Status: Disabled – Warning: None – Error: None VDisk bytes per second Virtual disk bytes per second for each I/O group. SAN Volume Controller defaults: – Status: Disabled – Warning: None – Error: None Chapter 12. Using TotalStorage Productivity Center Performance Manager 461
  • 482. MDisk I/O rate Total number of managed disk I/Os for each managed disk group. SAN Volume Controller defaults: – Status: Disabled – Warning: None – Error: None MDisk bytes per second Managed disk bytes per second for each managed disk group. SAN Volume Controller defaults: – Status: Disabled – Warning: None – Error: None You may only enable a particular threshold once minimum values for warning and error levels have been defined. If you attempt to select a threshold and enable it without first modifying these values, you will see a notification like the one in Figure 12-45. Figure 12-45 SAN Volume Controller threshold enable warning Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values of -1.0 are indicators that there is no recommended minimum value for the threshold and are therefore entirely user defined. You may elect to provide any reasonable value for these thresholds, keeping in mind the workload in your environment. To modify the warning and error values for a given threshold, you may select the threshold, and click the Properties button. The panel in Figure 12-46 will be shown. You can modify the threshold as appropriate, and accept the new values by selecting the OK button. Figure 12-46 Modifying threshold warning and error values 462 IBM TotalStorage Productivity Center V2.3: Getting Started
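The watermark behavior described in this section (a threshold cannot be enabled until both levels are defined, with -1.0 marking an undefined level) can be sketched as follows. The function and state names are assumptions for illustration, not the product's internal logic.

```python
# Illustrative sketch of the warning/error watermark logic described above.
# Per the Tip, a default value of -1.0 indicates there is no recommended
# minimum and the level is entirely user defined.
UNDEFINED = -1.0

def threshold_state(value, warning, error):
    """Classify a sampled metric value against warning/error watermarks."""
    if warning == UNDEFINED or error == UNDEFINED:
        return "disabled"   # cannot enable until both levels are defined
    if value >= error:
        return "error"
    if value >= warning:
        return "warning"
    return "normal"
```

With both levels still at -1.0, any attempt to evaluate (or enable) the threshold yields the disabled state, matching the warning panel shown in Figure 12-45.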
• 483. 12.1.10 Data collection for the DS6000 and DS8000 The process for performing data collection on the DS6000/DS8000 is similar to that of the ESS. You will need to set up a new Performance Data Collection Task for the DS6000/DS8000 device. Figure 12-48 is an example of the panel you should see when you drag the Data Collection task onto the DS6000/DS8000 device, or right-click the device and left-click Data Collection. Figure 12-47 shows the user validation prompt. As with the ESS data collection task: Define a task name and description Select sample frequency and duration of the task and click OK. Figure 12-47 DS6000/DS8000 user name and password Figure 12-48 The DS6000/DS8000 Data Collection Task As long as at least one data collection task has been completed, you are able to proceed with the steps to create a gauge to view your performance data (Figure 12-49 on page 464 through Figure 12-51 on page 466). Chapter 12. Using TotalStorage Productivity Center Performance Manager 463
  • 484. Figure 12-49 DS6000/DS8000 Cluster level gauge values 464 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 485. Figure 12-50 DS6000/DS8000 Rank Group level gauges Chapter 12. Using TotalStorage Productivity Center Performance Manager 465
• 486. Figure 12-51 DS6000/DS8000 Volume level gauges 12.1.11 DS6000 and DS8000 thresholds To view the available Performance Manager Thresholds, you can right-click the DS6000/DS8000 device and click Thresholds, or drag the Threshold task from the right-hand panel onto the DS6000/DS8000 device you want to query. A panel like the one in Figure 12-52 appears. Figure 12-52 The DS6000/DS8000 performance thresholds panel 466 IBM TotalStorage Productivity Center V2.3: Getting Started
• 487. You may only enable a particular threshold once minimum values for warning and error levels have been defined. If you attempt to select a threshold and enable it without first modifying these values, you will see a notification like the one in Figure 12-45 on page 462. Figure 12-53 DS6000/DS8000 threshold enable warning Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values of -1.0 are indicators that there is no recommended minimum value for the threshold and are therefore entirely user defined. You may elect to provide any reasonable value for these thresholds, keeping in mind the workload in your environment. To modify the warning and error values for a given threshold, you may select the threshold, and click the Properties button. The panel in Figure 12-46 on page 462 will be shown. You can modify the threshold as appropriate, and accept the new values by clicking the OK button. Figure 12-54 Modifying DS6000/DS8000 threshold warning and error values 12.2 Exploiting gauges Gauges are a very useful tool and help in identifying performance bottlenecks. In this section we show the drill-down capabilities of gauges. The purpose of this section is not to cover performance analysis in detail for a specific product, but to highlight the capabilities of the tool. You may adopt a similar approach for your own performance analysis. Chapter 12. Using TotalStorage Productivity Center Performance Manager 467
• 488. 12.2.1 Before you begin Before you begin customizing gauges, ensure that enough valid data samples have been collected in the performance database. This is true for any performance analysis. The data samples you collect must cover an appropriate time period that corresponds with high/low instances of the I/O workload. Also, the samples should cover sufficient iterations of the peak activity to perform analysis over a period of time. This is true for analyzing a pattern. You may use the advanced scheduler function of IBM Director to configure a repetitive task. If you plan to perform analysis for one specific instance of activity, then you can ensure that the performance data collection task covers the specific time period. 12.2.2 Creating gauges: an example In this example, we will cover creation and customization of gauges for ESS. First of all, we scheduled an ESS performance data collection task at every 3-hour interval using the IBM Director scheduler function for 8 days. For details on using the IBM Director scheduler, refer to 12.1.3, “Using IBM Director Scheduler function” on page 435. For creating the gauge, we launched the Performance gauges panel as shown in Figure 12-55, by right-clicking the ESS device. Figure 12-55 Gauges panel 468 IBM TotalStorage Productivity Center V2.3: Getting Started
• 489. Click the Create button to create a new gauge. You will see a panel similar to Figure 12-56. Figure 12-56 Create performance gauge We selected Cluster in the top left corner, the Total I/O Rate metric in the metrics box, and Cluster 1 in the component box. Also, we entered the following parameters: Name: 22219P_drilldown_analysis Description: Drilldown analysis for 22219 ESS Chapter 12. Using TotalStorage Productivity Center Performance Manager 469
• 490. For the Date range, we selected our historical data collection sampling period and selected Display gauge. Upon clicking OK, we got the panel shown in Figure 12-57. Figure 12-57 Gauge for ESS 22219 Cluster performance 470 IBM TotalStorage Productivity Center V2.3: Getting Started
• 491. 12.2.3 Zooming in on the specific time period The previous chart shows some peaks of high cluster I/O rate between April 6th and 8th. We decided to zoom in on the peak activity and hence selected a narrower time period as shown in Figure 12-58 and clicked the Apply button. Figure 12-58 Zooming on specific time period for Total IO rate metric 12.2.4 Modify gauge to view array level metrics For the next chart, we decided to have an array level metric for the same time period as before. Hence, we selected the gauge that we created earlier and clicked Properties as shown in Figure 12-59. Figure 12-59 Properties for a defined gauge Chapter 12. Using TotalStorage Productivity Center Performance Manager 471
  • 492. The subsequent panel is shown in Figure 12-60. We selected Array level metric for Cluster 1, Device Adapter 1, Loop A, and Disk Group 2 for Avg. Response time as circled in Figure 12-60. Figure 12-60 Customizing gauge for array level metric 472 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 493. The resultant chart is shown in Figure 12-61. Figure 12-61 Modified gauge with Avg. response time chart Chapter 12. Using TotalStorage Productivity Center Performance Manager 473
• 494. 12.2.5 Modify gauge to review multiple metrics in same chart Next, we decided to review Total I/O, reads/sec, and writes/sec in the same chart for comparison purposes. We selected these three metrics in the Gauge properties panel and clicked the Apply button. The resultant chart is shown in Figure 12-62. Tip: To select multiple metrics for the same chart, click the first metric, hold the Shift key, and click the last metric. If the metrics you plan to choose are not contiguous in the list, hold the Ctrl key instead of the Shift key. Figure 12-62 Viewing multiple metrics in the same chart In the chart, Writes and Total I/O overlap, and Reads are shown as zero. Tip: If you select multiple metrics that do not have the same units for the y-axis, an error is displayed as shown in Figure 12-63. Figure 12-63 Error displayed if there are no common units 474 IBM TotalStorage Productivity Center V2.3: Getting Started
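The common-units rule in the Tip above can be sketched as a simple check: metrics can share one chart only if they share a single y-axis unit. The metric-to-unit mapping shown is illustrative; it is not the product's actual metric catalog.

```python
# Illustrative metric-to-unit mapping; the real catalog belongs to the product.
METRIC_UNITS = {
    "Total I/O Rate": "ops/sec",
    "Read I/O Rate": "ops/sec",
    "Write I/O Rate": "ops/sec",
    "Avg. Response Time": "ms",
}

def can_chart_together(metrics):
    """Metrics can share one chart only if they map to a single y-axis unit."""
    units = {METRIC_UNITS[m] for m in metrics}
    return len(units) == 1
```

Mixing the three rate metrics passes the check; mixing a rate with a response time fails it, which is the case that produces the error in Figure 12-63.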
• 495. 12.3 Performance Manager command line interface The Performance Manager module includes a command line interface known as ‘perfcli’, located in the directory C:\Program Files\IBM\mdm\pm\pmcli. In its present release, the perfcli utility includes support for ESS and SAN Volume Controller data collection task creation and management (starting and stopping data collection tasks). There are also executables that support viewing and management of task filters, alert thresholds, and gauges. Detailed help is available at the command line, with information about syntax and specific examples of usage. 12.3.1 Performance Manager CLI commands The Performance Manager Command Line Interface (perfcli) includes the commands shown in Figure 12-64. Figure 12-64 Directory listing of the perfcli commands startesscollection/startsvccollection: These commands are used to build and run data collection against the ESS or SAN Volume Controller, respectively. lscollection: This command is used to list the running, aborted, or finished data collection tasks on the Performance Management server. stopcollection: This command may be used to stop data collection against a specified task name. lsgauge: You can use the lsgauge command to display a list of existing gauge names, types, device types, device IDs, modified dates, and description information. rmgauge: Use this command to remove existing gauges. showgauge: This command is used to display performance data output using an existing defined gauge. setessthresh/setsvcthresh: These two commands are respectively used to set ESS and SAN Volume Controller performance thresholds. cpthresh: You can use the cpthresh command to copy threshold properties from one selected device to one or more other devices. setfilter: You can use setfilter to set or change the existing threshold filters. lsfilter: This command may be used to display the threshold filter settings for all devices specified. Chapter 12. Using TotalStorage Productivity Center Performance Manager 475
• 496. setoutput: This command may be used to view or modify the existing data collection output formats, including settings for paging, row printing, format (default, XML, or character delimited), header printing, and output verbosity. lsdev: This command can be used to list the storage devices that are used by TotalStorage Productivity Center for Disk. lslun: This command can be used to list the LUNs or Performance Manager volumes associated with storage devices. lsthreshold: This command can be used to list the threshold status associated with storage devices. lsgauge: This command can be used to list the existing gauge names, gauge type, device name, device ID, date modified, and optionally device information. showgauge: Use this command to display performance output by triggering an existing gauge. showcapacity: This command displays managed capacity, the sum of managed capacity by device type, and the total of all ESS and SAN Volume Controller managed storage. showdbinfo: This command displays the percent full, used space, and free space of the Performance Manager database. lsprofile: Use this command to display Volume Performance Advisor profiles. cpprofile: Use this command to copy Volume Performance Advisor profiles. mkprofile: Use this command to create a workload profile that you can use later with the mkrecom command to create a performance recommendation for ESS volume allocation. mkrecom: Use this command to generate and, optionally, apply a performance LUN advisor recommendation for ESS volumes. lsdbpurge: This command can be used to display the status of database purge tasks running in TotalStorage Productivity Center for Disk. tracklun: This command can be used to obtain historical performance statistics used to create a profile. startdbpurge: Use this command to start a database purge task. showdev: Use this command to display device properties. setoutput: This command sets the output format for the administrative command line interface.
cpthresh: This command can be used to copy threshold properties from one device to other devices that are of the same type. rmprofile: Use this command to delete performance LUN advisor profiles. 476 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 497. 12.3.2 Sample command outputs We show some sample commands in Figure 12-65. This sample shows how to invoke perfcli commands from the Windows command line interface. Figure 12-65 Sample perfcli command from Windows command line interface Figure 12-66 and Figure 12-67 show perfcli sample commands within the perfcli tool. Figure 12-66 perfcli sample command within perfcli tool Figure 12-67 perfcli lslun sample command within perfcli tool Chapter 12. Using TotalStorage Productivity Center Performance Manager 477
• 498. 12.4 Volume Performance Advisor (VPA) The Volume Performance Advisor (VPA) is designed to be an expert advisor that recommends allocations for storage space based on the size of the request, an estimate of the performance requirement and type of workload, and the existing load on an ESS that might compete with the new request. The Volume Performance Advisor will then make a recommendation as to the number and size of Logical Unit Numbers (logical volumes, or LUNs) to allocate, and a location within the ESS that provides good placement with respect to the defined performance considerations. The user is given the option of implementing the recommendation (allocating the storage) or obtaining subsequent recommendations. 12.4.1 VPA introduction Data placement within a large, complex storage subsystem has long been recognized as a storage and performance management issue. Performance may suffer if placement is done casually or carelessly. It can also be costly to discover and correct those performance problems, adding to the total cost of ownership. Performance Manager is designed to provide an automated approach to storage allocation through the functions of a storage performance advisor, called the Volume Performance Advisor (VPA). The advisor is designed to automate decisions that could be achieved by an expert storage analyst given the time and sufficient information. The goal is to give very good advice by allowing VPA to consider the same factors that an administrator would in deciding where to best allocate storage. Note: At this point in time, the VPA tool is available for the IBM ESS only. 12.4.2 The provisioning challenge You want to allocate a specific amount of storage to run a particular workload.
You could be a storage administrator interacting through a user interface, or the request could come from another system component (such as a SAN management product, file system, DataBase Management System (DBMS), or logical volume manager) interacting with the VPA Application Programming Interface (API). A storage request is satisfied by selecting some number of logical volumes (Logical Unit Numbers, or LUNs). For example, if you ask for 400GB of storage, then a low I/O rate, cache-friendly workload could be handled on a single 400GB logical disk residing on a single disk array; whereas a cache-unfriendly, high-bandwidth application might need several logical volumes allocated across multiple disk arrays, using LVM, file system, or database striping to achieve the required performance. The performance of those logical disks depends on their placement on physical storage, and on what other applications might be sharing the arrays. The job of the Volume Performance Advisor (VPA) is to select an appropriate set (number and placement) of logical disks that: Considers the performance requirements of the new workload Balances the workload across the physical resources Considers the effects of the other workloads competing for the resources 478 IBM TotalStorage Productivity Center V2.3: Getting Started
• 499. Storage administrators and application developers need tools that pull together all the components of the decision process used for provisioning storage. They need tools to characterize and manage workload profiles. They need tools to monitor existing performance, and tools to help them understand the impact of future workloads on current performance. What they need is a tool that automates this entire process, which is what VPA for ESS does. 12.4.3 Workload characterization and workload profiles Intelligent data placement requires a rudimentary understanding of the application workload, and the demand likely to be placed on the storage system. For example, cache-unfriendly workloads with high I/O intensity require a larger number of physical disks than cache-friendly or lightweight workloads. To account for this, the VPA requires specific workload descriptions to drive its decision-making process. These workload descriptions are precise, indicating I/O intensity rates; percentages of read, write, random, and sequential content; cache information; and transfer sizes. This workload-based approach is designed to allow the VPA to match the performance attributes of the storage with the workload attributes with a high degree of accuracy. For example, workloads with high random-write content might best be pointed to RAID 10 storage. High cache hit ratio environments can probably be satisfied with fewer logical disks. Most users have little experience or capability for specifying detailed workload characteristics. The VPA is designed to deal with this problem in three ways: Predefined workload definitions based on characterizations of environments across various industries and applications. They include standard OLTP type workloads, such as “OLTP High” and “Batch Sequential”. Capturing existing workloads by observing storage access patterns in the environment.
The VPA allows the user to point to a grouping of volumes and a particular window of time, and create a workload profile based on the observed behavior of those volumes. Creation of hypothetical workloads that are similar to existing profiles, but differ in some specific metrics. The VPA has tools to manage a library of predefined and custom workloads, to create new workload profiles, and to modify profiles for specific purposes. 12.4.4 Workload profile values It is possible to change many specific values in the workload profile. For example, the access density may be high because a test workload used small files. It can be adjusted to a more accurate number. Average transfer size always defaults to 8KB, and should be modified if other information is available for the actual transfer size. The peak activity information should also be adjusted. It defaults to the time when the profile workload was measured. In an existing environment it should specify the time period for contention analysis between existing workloads and the new workload. Figure 12-68 on page 480 shows a user defined VPA workload profile. Chapter 12. Using TotalStorage Productivity Center Performance Manager 479
• 500. Figure 12-68 User defined workload profile details example 12.4.5 How the Volume Performance Advisor makes decisions As mentioned previously, the VPA is designed to take several factors into account when recommending volume allocation: Total amount of space required Minimum and maximum number of volumes, and sizes of volumes Workload requirements Contention from other workloads VPA tries to allocate volumes on the least busy resources, at the same time balancing workload across available resources. It uses the workload profile to estimate how busy internal ESS resources will become if that workload is allocated on those resources. So it estimates how busy the RAID arrays, disk adapters, and controllers will become. The workload profile is very important in making that decision. For example, cache hit ratios affect the activity on the disk adapters and RAID arrays. When creating a workload profile from existing data, it is important to pick a representative time sample to analyze. Also, you should examine the I/O per second per GB. Many applications have an access density in the range of 0.1 to 3.0. If it is significantly outside this range, then this might not be an appropriate sample. 480 IBM TotalStorage Productivity Center V2.3: Getting Started
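The access-density sanity check described above can be expressed directly; the 0.1 to 3.0 range is the rule of thumb quoted in the text, and the function names are illustrative assumptions.

```python
# Access density = I/O operations per second per GB of allocated capacity.
def access_density(io_per_sec, capacity_gb):
    return io_per_sec / capacity_gb

def is_representative_sample(io_per_sec, capacity_gb, low=0.1, high=3.0):
    """Flag samples whose access density falls outside the typical range."""
    return low <= access_density(io_per_sec, capacity_gb) <= high
```

For instance, 400 I/O per second against 400GB gives a density of 1.0 and would pass; 4 I/O per second against the same capacity gives 0.01 and would be rejected as an unrepresentative sample.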
• 501. The VPA will tend to utilize resources that can best accommodate a particular type of workload. For example, high write content will make RAID 5 arrays busier than RAID 10, and VPA will therefore bias to RAID 10. Faster devices will be less busy, so VPA biases allocations to the faster devices. VPA also analyzes the historical data to determine how busy the internal ESS components (arrays, disk adapters, clusters) are due to other workloads. In this way, VPA tries to avoid allocating on already busy ESS components. If VPA has a choice among several places to allocate volumes, and they appear to be about equal, it is designed to apply a randomizing factor. This keeps the advisor from always giving the same advice, which might cause certain resources to be overloaded if everyone followed that advice. This also means that several usages of VPA by the same user may not necessarily get the same advice, even if the workload profiles are identical. Note: VPA tries to allocate the fewest possible volumes, as long as it can allocate on low utilization components. If the components look too busy, it will allocate more (smaller) volumes as a way of spreading the workload. It will not recommend more volumes than the maximum specified by the user. VPA may, however, be required to recommend allocation on very busy components. A utilization indicator in the user panels will indicate whether allocations would cause components to become heavily utilized. The I/O demand specified in the workload profile for the new storage being allocated is not a Service Level Agreement (SLA). In other words, there is no guarantee that the new storage, once allocated, will perform at or above the specified access density. The VPA will make recommendations unless the available space on the target devices is exhausted. An invocation of VPA can be used for multiple recommendations.
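The least-busy placement with a randomizing factor, as described above, can be sketched as follows. The 5% near-tie tolerance and the function name are assumptions for illustration; the actual VPA algorithm weighs many more factors (RAID type, device speed, projected component load).

```python
import random

def pick_array(utilization, tolerance=0.05, rng=None):
    """Pick the least-busy array, breaking near-ties randomly.

    `utilization` maps array names to estimated busy fractions (0.0 to 1.0).
    Candidates within `tolerance` of the least-busy value count as "about
    equal", and one of them is chosen at random so repeated requests do not
    all land on the same resource.
    """
    rng = rng or random.Random()
    best = min(utilization.values())
    candidates = [a for a, u in utilization.items() if u <= best + tolerance]
    return rng.choice(candidates)
```

This captures why two identical requests can receive different advice: when several placements look about equal, the random tie-break varies the recommendation on purpose.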
To handle a situation where multiple sets of volumes are to be allocated with different workload profiles, it is important that the same VPA wizard be used for all sets of recommendations. Select Make additional recommendations on the View Recommendations page, as opposed to starting a completely new sequence for each separate set of volumes to be allocated. VPA is designed to remember each additional (hypothetical) workload when making additional recommendations. There are, of course, limitations to the use of an expert advisor such as VPA. There may well be other constraints (like source and target FlashCopy requirements) which must be considered. Sometimes these constraints can be accommodated with careful use of the tool, and sometimes they may be so severe that the tool must be used very carefully. That is why VPA is designed as an advisor. In summary, the Volume Performance Advisor (VPA) provides a tool to help automate the complex decisions involved in data placement and provisioning. In short, it represents a future direction of storage management software: computers should monitor their resources and make autonomic adjustments based on that information. The VPA is an expert advisor that provides a step in that direction. 12.4.6 Enabling Trace Logging for the Director GUI Interface Enabling GUI logging can be useful for troubleshooting GUI problems, however unlikely, which you may encounter while using VPA. Since this function requires a reboot of the server where TotalStorage Productivity Center for Disk is running, you may consider doing this prior to engaging in use of the VPA. Chapter 12. Using TotalStorage Productivity Center Performance Manager 481
• 502. On the Windows platform, follow these steps: 1. Start → Run regedit.exe 2. Open the HKEY_LOCAL_MACHINE → SOFTWARE → Tivoli → Director → CurrentVersion key. 3. Modify the LogOutput value; set it to 1. 4. Reboot the server. The output log location from the instructions above is X:/program files/ibm/director/log (where X is the drive where the Director application was installed). The log file for the Director is com.tivoli.console.ConsoleLauncher.stderr. On the Linux platform, TWGRas.properties turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash: $ export TWG_DEBUG_CONSOLE=true 12.4.7 Getting started In this section, we provide detailed steps for using VPA with predefined performance parameters (a workload profile) to obtain advice on optimal volume placement in your environment. For detailed steps on creating customized workload profiles, refer to 12.4.8, “Creating and managing workload profiles” on page 508. To use VPA with a customized workload profile, the major steps are: Create a data collection task in Performance Manager In order to utilize the VPA, you must first have a useful amount of performance data collected from the device you want to examine. Refer to “Performance Manager data collection” on page 429 for more detailed instructions regarding use of the Performance data collection feature of the Performance Manager. Schedule and run a successful performance data collection task It is important to have an adequate amount of historical data to provide a statistically relevant sample.
Create or use a user-defined workload profile Use the Volume Performance Advisor to: – Add Devices – Specify Settings – Select workload profile (predefined or user defined) – View Profile Details – Choose Candidate Location – Verify Settings – Approve Recommendations (or restart the VPA process with different parameters) Workload profiles The basic VPA concept, and the storage administrator’s goal, is to balance the workload across all device components. This requires detailed ESS configuration information covering all components (clusters, device adapters, logical subsystems, ranks, and volumes). 482 IBM TotalStorage Productivity Center V2.3: Getting Started
To express the workload represented by the new volumes, they are assigned a workload profile. A workload profile contains various performance attributes: I/O demand, in I/O operations per second per GB of volume size Average transfer size, in KB Percentage mix of I/O - sequential or random, and read or write Cache utilization - percentage of cache hits for random reads and cache misses for random writes Peak activity time - the time period when the workload is most active You can create your own workload profile definitions in two ways: By copying existing profiles and editing their attributes By performing an analysis of existing volumes in the environment The second option is known as a Workload Analysis. You select one or more existing volumes, and the historical performance data for these volumes is retrieved to determine their (average) performance behavior over time. Using VPA with a predefined workload profile This section describes a VPA example using a default workload profile. Its purpose is to help you become familiar with the VPA tool. However, we recommend that you generate and use a customized workload profile after gathering performance data; a customized profile will be more realistic in terms of your application performance requirements. The VPA provides five predefined (canned) workload profile definitions: 1. OLTP Standard: for a general Online Transaction Processing (OLTP) environment 2. OLTP High: for higher demand OLTP applications 3. Data Warehouse: for data warehousing applications 4. Batch Sequential: for batch applications accessing data sequentially 5. Document Archival: for archival applications, write-once, read-infrequently Note: Online Transaction Processing (OLTP) is a type of program that facilitates and manages transaction-oriented applications. 
OLTP is frequently used for data entry and retrieval transactions in a number of industries, including banking, airlines, mail order, supermarkets, and manufacturers. Probably the most widely installed OLTP product is IBM's Customer Information Control System (CICS®). Launching the VPA tool The steps to use a default workload profile to have the Volume Performance Advisor examine and advise you on volume placement are: 1. In the IBM Director Task pane, click Multiple Device Manager. 2. Click Manage Performance. 3. Click Volume Performance Advisor.
4. You can choose between two methods to launch VPA: a. “Drag and Drop” the VPA icon onto the storage device to be examined (see Figure 12-69). Figure 12-69 “Drag and Drop” the VPA icon to the storage device
b. Select the storage device → right-click the device → select Volume Performance Advisor (see Figure 12-70). Figure 12-70 Select ESS and right-click for VPA If a storage device that is not in the scope of the VPA is selected for the drag and drop step, the message shown in Figure 12-71 will open. Devices such as a CIMOM or an SNMP device will generate this error. Only the ESS is supported at this time. Figure 12-71 Error launching VPA example ESS User Validation If this is the first time you are using the VPA tool for the selected ESS device, the ESS User Validation panel will display as shown in Figure 12-72 on page 486. If you have already validated the ESS user for VPA usage, this panel is skipped and the VPA settings default panel launches as shown in Figure 12-77 on page 488.
Figure 12-72 ESS User validation screen example In the ESS User Validation panel, specify the user name, password, and port for each of the IBM TotalStorage Enterprise Storage Servers (ESSs) that you want to examine. During the initial setup of the VPA, on the ESS User Validation window, you need to first select the ESS (as shown in Figure 12-73 on page 487) and then input the correct user name, password, and password verification. You must click Set after you have entered these values in the appropriate fields (see the circled portion in Figure 12-74 on page 487). When you click Set, the application populates the data you entered (masked) into the correct fields in the Device Information box (see Figure 12-75 on page 487). If you do not click Set before selecting OK, the following errors will appear, depending on what data needs to be entered: BWN005921E (ESS Specialist username has not been entered correctly or applied) BWN005922E (ESS Specialist password has not been entered correctly or applied) If you encounter these errors, ensure you have correctly entered the values in the input fields in the lower part of the ESS User Validation window and then retry by clicking OK. The ESS User Validation window contains the following fields: – Devices table - Select an ESS from this table. It includes device IDs and device IP addresses of the ESS devices on which this task was dropped. – ESS Specialist username - Type a valid ESS Specialist user name for the selected ESS. Subsequent displays of the same information for this ESS show the user name and password that were entered. You can change the user name by entering a new user name in this field. – ESS Specialist password - Type a valid ESS Specialist password for the selected ESS. Any existing password entries are removed when you change the ESS user name. 
– Confirm password - Type the valid ESS Specialist password again exactly as you typed it in the password field. – ESS Specialist port - Type a valid ESS port number. The default is 80. – Set button - Click to set names, passwords, and ports without closing the panel. – Remove button - Click to remove the selected information. – Add button - Click to invoke the Add devices panel.
– OK button - Click to save the changes and close the panel. Figure 12-73 ESS User validation - select ESS Figure 12-74 Apply ESS Specialist user defined input Figure 12-75 Applied ESS Specialist user defined input
Click the OK button to save the changes and close the panel. The application will then attempt to access the ESS storage device. The error message in Figure 12-76 usually indicates an incorrect user name or password; it can also appear if a firewall is blocking access to the storage device. If this error occurs, verify that you are using the correct user name and password for authentication, and that you have firewall access to establish storage device connectivity. Figure 12-76 Authentication error example Configuring VPA settings for the ESS diskspace request After you have successfully completed the User Validation step, the VPA Settings window will open (see Figure 12-77). Figure 12-77 VPA Settings default panel
You use the Volume performance advisor - Settings window to identify your requirements for host attachment and the total amount of space that you need. You can also use this panel to specify volume number and size constraints, if any. We begin with our example as shown in Figure 12-78. Figure 12-78 VPA settings for example
Here we describe the fields in this window: – Total space required (GB) - Type the total space required in gigabytes. The smallest allowed value is 0.1 GB. We requested 3 GB for our example. Note: You cannot exceed the volume space available for examination on the server(s) you select. To show the error, in this example we selected host Zombie and a total required space of 400 GB. We got the error shown in Figure 12-79. Action: Retry with different values and look at the server log for details. Solutions: – Select a smaller Total space required (GB) value and retry this step. – Select more hosts, which will include adequate volume space for this task. – You may want to select the box entitled Consider volumes that have already been allocated but not assigned in the performance recommendation. Enabling the Director log file will generate logs for troubleshooting Director GUI components, including the Performance Manager console. In this example, the file we reference is com.tivoli.console.ConsoleLauncher.stderr (com.tivoli.console.ConsoleLauncher.stdout is also useful). The sample log is shown in Figure 12-80. Figure 12-79 Error showing exceeded the space requested Figure 12-80 Director GUI console error log – Specify a volume size range button - Click the button to activate the field, then use the Minimum size (GB) spinner and the Maximum size (GB) spinner to specify the range. In this example, we selected 1 GB as the minimum and 3 GB as the maximum.
– Specify a volume quantity range button - Click the button to activate the field, then use the Minimum number spinner and the Maximum number spinner to specify the range. – Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation - If you check this box, VPA will use these types of volumes in the volume performance examination process. When this box is checked and you click Next, the VPA wizard opens the warning window shown in Figure 12-81. Figure 12-81 Consider volumes - warning window example Note: The BWN005996W message is a warning (W). You have selected to reuse unassigned existing volumes, which could potentially cause data loss. Go back to the VPA Settings window by clicking OK if you do not want to consider unassigned volumes. Click the Help button for more information. Explanation: The Volume Performance Advisor will assume that all currently unassigned volumes are not in use, and may recommend the reuse of these volumes. If any of these unassigned volumes are in use — for example, as replication targets or for other data replication purposes — and these volumes are recommended for reuse, the result could be data loss. Action: Go back to the Settings window and clear Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation if you do not want to consider volumes that may potentially be used for other purposes. If you want to continue to consider unassigned volumes in your recommendations, then continue. – Host Attachments table - Select one or more hosts from this table. This table lists all hosts (by device ID) known to the ESS that you selected for this task. It is important to choose only hosts of the same server type for volume consideration. 
It is also important to note that the VPA takes into consideration the maximum volume limitations of the server type, such as Windows (256 volumes maximum) and AIX (approximately 4000 volumes). If you select a volume range above the server limit, VPA will display an error. In our example we used the host “Zombie”. – Next button - Click to invoke the Choose workload profile window. You use this window to select a workload profile from a list of existing profile templates.
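To make the interaction of these constraints concrete, the sketch below enumerates which volume counts could satisfy a space request given a size range and a per-host volume limit. This is our own simplified model (it assumes equal-sized volumes and a single hard per-host cap), not the product's actual allocation logic.

```python
def feasible_volume_counts(total_gb, min_size_gb, max_size_gb, host_volume_limit):
    """Return the volume counts that can carve total_gb into equal-sized
    volumes within [min_size_gb, max_size_gb] without exceeding the host's
    maximum volume limit. Illustrative assumption: equal-sized volumes."""
    counts = []
    for n in range(1, host_volume_limit + 1):
        size = total_gb / n
        if min_size_gb <= size <= max_size_gb:
            counts.append(n)
        if size < min_size_gb:  # sizes only shrink as n grows, so stop early
            break
    return counts

# Our example request: 3 GB total, 1-3 GB per volume, a Windows-style 256-volume cap.
print(feasible_volume_counts(3.0, 1.0, 3.0, 256))  # [1, 2, 3]
```

A request of one 3 GB volume, two 1.5 GB volumes, or three 1 GB volumes would all fit the example settings; a tighter size range or a lower host limit shrinks the feasible set, which is when the VPA Settings panel reports an error.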
5. Click Next after entering your preferred parameters, and the Choose workload profile window will display (see Figure 12-82). Figure 12-82 VPA Choose workload profile window example Choosing a workload profile You can use the Choose workload profile window to select a workload profile from a list of existing profiles. The Volume performance advisor uses the workload profile and other performance information to advise you about where volumes should be created. For our example we selected the OLTP Standard default profile type. – Workload profiles table - Select a profile from this table to view or modify. The table lists predefined or existing workload profile names and descriptions. Predefined workload profiles are shipped with Performance Manager. Workload profiles that you previously created, if any, are also listed. – Manage profiles button - Click to invoke the Manage workload profile panel. – Profile details button - Click to see details about the selected profile in the Profile details panel as shown in Figure 12-83 on page 493. Details include the following types of information: • Total I/O per second per GB • Random read cache hits • Sequential and random reads and writes • Start and end dates • Duration (days)
Note: You cannot modify the properties of the workload profile from this panel. The panel options are “greyed out” (inactive). You can make changes to a workload profile from the Manage Profile → Create like panel. – Next button - Click to invoke the Choose candidate locations window. You can use this panel to select volume locations for the VPA to consider. Figure 12-83 Properties for OLTP Standard profile 6. After reviewing the properties of the predefined workload profiles, select the workload profile from the table that most closely resembles your workload profile requirements. For our scenario, we selected the OLTP Standard workload name from the Choose workload profile window. We are going to use this workload profile for the LUN placement recommendations. – Name - Shows the default profile name. The following restrictions apply to the profile name: • The workload profile name must be between 1 and 64 characters. • Legal characters are A-Z, a-z, 0-9, “-”, “_”, “.”, and “:” • The first character cannot be “-” or “_”.
Spaces are not acceptable characters. – Description - Shows the description of the workload profile. – Total I/O per second per GB - Shows the Total I/O per second rate for the selected workload profile. – Average transfer size (KB) - Shows the values for the selected workload profile. – Caching information box - Shows the cache hits and destage percentages: • Random read cache hits - Ranges from 1 - 100%. The default is 40%. • Random write destage - Ranges from 1 - 100%. The default is 33%. – Read/Write information box - Shows the read and write values. The percentages for the four fields must equal 100%. • Sequential reads - The default is 14%. • Sequential writes - The default is 23%. • Random reads - The default is 36%. • Random writes - The default is 32%. – Peak activity information box - Because we are currently only viewing the properties of an existing profile, the parameters for this box are not selectable, but you may review it as a reference for subsequent usage. After you review this box, you may click the Close button. When creating a new profile, this box allows you to input the following parameters: • Use all available performance data radio button. Select this option if you want to include all available previously collected performance data in consideration for this workload profile. • Use the specified peak activity period radio button. Select this button as an alternative (instead of the Use all available performance data option) for consideration in this workload profile definition. • Time setting drop-down menu. Select from the following options for the time setting you want to use for this workload profile: - Device time - Client time - Server time - GMT • Past days to analyze spinner. Use the spinner (or manually enter the number) to select the number of days of historical information you want to consider for this workload profile analysis. • Time Range drop-down lists. 
Select the Start time and End time to consider using the appropriate fields. – Close button - Click to close the panel. You will be returned to the Choose workload profile window.
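The profile fields and the naming and percentage rules described above can be sketched as a small data model. This is purely illustrative: the class layout, the regex encoding of the naming rules, and all numeric values are our own assumptions, not the product's internal representation.

```python
import re
from dataclasses import dataclass

# Naming rules as stated above: 1-64 characters drawn from A-Z a-z 0-9 - _ . :,
# no spaces, and the first character may not be "-" or "_".
NAME_RE = re.compile(r'[A-Za-z0-9.:][A-Za-z0-9._:-]{0,63}')

def valid_profile_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

@dataclass
class WorkloadProfile:
    """Hypothetical model of the fields shown in the profile Properties panel."""
    name: str
    io_per_sec_per_gb: float       # Total I/O per second per GB
    avg_transfer_kb: float         # Average transfer size (KB)
    seq_read_pct: int              # Read/Write information box
    seq_write_pct: int
    rand_read_pct: int
    rand_write_pct: int
    rand_read_cache_hit_pct: int   # Caching information box
    rand_write_destage_pct: int

    def valid_rw_mix(self) -> bool:
        # The four read/write percentages must total exactly 100.
        return (self.seq_read_pct + self.seq_write_pct
                + self.rand_read_pct + self.rand_write_pct) == 100

    def demand_for(self, capacity_gb: float) -> float:
        # Total I/O per second implied by this profile for a given capacity.
        return self.io_per_sec_per_gb * capacity_gb

# All values below are made-up examples, not product defaults.
p = WorkloadProfile("ITSO_OLTP_like", 2.0, 8.0, 25, 25, 25, 25, 40, 33)
print(valid_profile_name(p.name), p.valid_rw_mix(), p.demand_for(3.0))
```

The `demand_for` helper also shows why the I/O demand attribute is expressed per GB: a 3 GB request under a profile demanding 2.0 I/O per second per GB implies 6.0 I/O per second of workload that the advisor must place.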
Choosing candidate locations Select the name of the profile you want to use from the VPA Choose workload profile window, and the Choose candidate locations window will open (see Figure 12-84). We chose our OLTP Standard workload profile for the VPA analysis. Figure 12-84 Choose candidate locations window You can use the Choose candidate locations page to select volume locations for the performance advisor to consider. You can choose to either include or exclude the selected locations from the advisor's consideration. The VPA uses historical performance information to advise you about where volumes should be created. The Choose candidate locations page is one of the panels the performance advisor uses to collect and evaluate this information. – Device list - Displays device IDs or names for each ESS on which the task was activated (each ESS on which you dropped the Volume advisor icon). – Component Type tree - When you select a device from the Device list, the selection tree opens on the left side of the panel. The ESS component levels are shown in the tree. The following objects might be included: • ESS • cluster • device adapter
• array • disk group The component level names are followed by information about the capacity and the disk utilization of the component level. For example, we used the System component level. It shows Component ID - 2105-F20-16603, Type - System, Description - 2105-F20-16603-IBM, Available capacity - 311 GB, Utilization - Low (see Figure 12-84 on page 495). Tip: You can select the different ESS component types, and the VPA will reconsider the volume placement advice based on that particular selection. To familiarize yourself with the options, select each component in turn to determine which component-type-centric advice you prefer before proceeding to the next step. Select a component type from the tree to display a list of the available volumes for that component in the Candidates table (see Figure 12-84 on page 495). We chose System for this example; it represents the entire ESS system in this case. Click the Add button to add the component selected in the Candidates table to the Selected candidates table. See Figure 12-85. It shows the selected candidate as 2105-F20-16603. Figure 12-85 VPA Choose candidate locations Component Type tree example (system)
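The capacity and utilization figures attached to each component hint at how an advisor can rank candidate locations to balance workload. The toy ranking below prefers the least-utilized component and breaks ties by available capacity; the VPA's real weighting is internal to the product, so this ordering is strictly our assumption.

```python
# Map the panel's utilization labels to a sortable rank (our assumption).
UTILIZATION_RANK = {"Low": 0, "Medium": 1, "High": 2}

def rank_candidates(components):
    """Order candidate components: lowest utilization first, then the most
    available capacity. components: dicts with 'id', 'available_gb',
    'utilization' keys, mirroring the Candidates table columns."""
    return sorted(
        components,
        key=lambda c: (UTILIZATION_RANK[c["utilization"]], -c["available_gb"]),
    )

# Hypothetical candidates; the second entry mirrors the Cluster 2 figures
# (308 GB available, Low utilization) quoted later in this example.
candidates = [
    {"id": "2105-F20-16603:1", "available_gb": 120, "utilization": "Medium"},
    {"id": "2105-F20-16603:2", "available_gb": 308, "utilization": "Low"},
]
print(rank_candidates(candidates)[0]["id"])  # 2105-F20-16603:2
```

Under this simple rule the lightly loaded cluster wins, which matches the intuition behind the VPA recommendation shown in this chapter's walkthrough.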
Verify settings for VPA Click the Next button to invoke the Verify Settings window (see Figure 12-86). Figure 12-86 VPA Verify settings window example You can use the Verify settings panel to verify the volume settings that you specified in the previous panels of the VPA.
Approve recommendations After you have successfully completed the Verify Settings step, click the Next button, and the Approve Recommendations window opens (see Figure 12-87). Figure 12-87 VPA Recommendations window example You use the Recommendations window first to view the recommendations from the VPA and then to create new volumes based on those recommendations. In this example, VPA recommends the location of the volume as 16603:2:4:1:1700 in the Component ID column. This means the recommended volume location is at the ESS with ID 16603, Cluster 2, Device Adapter 4, Array 1, and volume ID 1700. With this information, it is also possible to create the volume manually via the ESS Specialist browser interface, or to use VPA to create it. In the Recommendations window of the wizard, you can choose whether the recommendations are to be implemented, and whether to loop around for another set of recommendations. At this time, you have two options (other than to cancel the operation). Make your final selection to Finish, or return to the VPA for further recommendations. a. If you do not want to assign the volumes using the current VPA advice, or want the VPA to make another recommendation, check only the Make Additional Recommendations box. 
b. If you want to use the current VPA recommendation and make additional volume assignments at this time, select both the Implement Recommendations and Make Additional Recommendations check boxes. If you choose both options, you must first wait until the current set of volume recommendations is created, or created and assigned, before continuing. If you make this type of selection, a secondary window will appear which runs synchronously within the VPA. Tip: Stay in the same VPA session if you are going to implement volumes and add new volumes. This enables VPA to provide advice for your current selections, checking for previous assignments and verifying that no other VPA is processing the same volumes. VPA loopback after Implement Recommendations selected In the following example, we show the results of a VPA session. 1. In this example, we decided to Implement recommendations and also Make additional recommendations. Hence we selected both check boxes (see Figure 12-88). Figure 12-88 VPA Recommendation selected check box
2. Click the Continue button to proceed with the VPA advice (see Figure 12-88 on page 499). Figure 12-89 VPA results - in progress panel 3. In Figure 12-89, we can see that the volumes are being created on the server we selected previously. This process takes a little time, so be patient. 4. Figure 12-90 indicates that the volume creation and assignment to the ESS has completed. Be patient, and momentarily the VPA loopback sequence will continue. Figure 12-90 VPA final results
5. After the volume creation step has successfully completed, the Settings window will again open so that you may add more volumes (see Figure 12-91). Figure 12-91 VPA settings default
For the additional recommendations, we decided to use the same server, but we specified the volume quantity range instead of the volume size range for the requested space of 2 GB. See Figure 12-92. Figure 12-92 VPA additional space request
After clicking Next, the Choose workload profile panel opens. We selected the same profile as before: OLTP Standard. See Figure 12-93. Figure 12-93 Choose Profile
After clicking Next, the Choose candidate locations panel opens. We selected Cluster from the Component Type drop-down list. See Figure 12-94. Figure 12-94 Choose candidate location The Component Type Cluster shows Component ID as 2105-F20-16603:2, Type as Cluster, Descriptor as 2, Available capacity as 308 GB, and Utilization as Low. This indicates that VPA plans to provision additional capacity on Cluster 2 of this ESS.
After clicking the Add button, Cluster 2 is a selected candidate for the new volumes. See Figure 12-95. Figure 12-95 Choose candidate location - select cluster
Upon clicking Next, the Verify settings panel opens as shown in Figure 12-96. Figure 12-96 Verify settings
After verifying the settings and clicking Next, the VPA recommendations window opens. See Figure 12-97. Figure 12-97 VPA recommendations
Since the purpose of this example is to show our readers the VPA looping only, we decided to clear both the Implement Recommendations and Make additional recommendations check boxes. Clicking Finish completed the VPA example (Figure 12-98). Figure 12-98 Finish VPA panel 12.4.8 Creating and managing workload profiles The VPA bases its volume placement recommendations on the characteristics of the chosen workload profile. VPA decisions will not be accurate if an improper workload profile is chosen, and this may cause future performance issues for the application. You must have a valid and appropriate workload profile created before using VPA for any application. Therefore, creating and managing workload profiles is an important task, which involves regular upkeep of the workload profiles for each application whose disk I/O is served by the ESS. Figure 12-99 on page 509 shows a typical sequence for managing workload profiles.
Figure 12-99 Typical sequence for managing workload profiles Before using VPA for any additional disk space requirement for an application, you will need to: determine the typical I/O workload type of that application; and have performance data collected which covers peak load time periods. You will need to determine the broad category that the selected I/O workload fits into, that is, whether it is OLTP high, OLTP standard, Data Warehouse, Batch sequential, or Document archival. This is shown as the highlighted box in the diagram. TotalStorage Productivity Center for Disk provides predefined profiles for these workload types, and it allows you to create additional similar profiles by choosing Create like profiles. If you do not find any match with the predefined profiles, you may prefer to Create a new profile. When choosing Create like or Create profiles, you will also need to specify historical performance data samples covering the peak load activity time period. Optionally, you may specify additional I/O parameters. Upon submitting the Create or Create like profile, the performance analysis will be performed and the results will be displayed. 
Depending upon the outcome of the results, you may need to re-validate the parameters of the data collection task and ensure that peak load samples are taken correctly. If the results are acceptable, you may save the profile. This profile can then be referenced for future use by VPA. In “Choosing workload profiles” on page 510, we cover the step-by-step tasks using an example.
Choosing workload profiles You can use Performance Manager to select a predefined workload profile or to create a new workload profile based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server. You can also use a set of Performance Manager panels to create and manage the workload profiles. There are three methods you can use to choose a workload profile, as shown in Figure 12-100. Figure 12-100 Choosing workload profiles Note: Using a predefined profile does not require pre-existing performance data, but the other two methods require historical performance data from the target storage device. You can launch the workload profiles management tool using the drag and drop method from the IBM Director console GUI interface. Drag the Manage Workload Profile task to the target storage device as shown in Figure 12-101. Figure 12-101 Launch Manage Workload Profile
If you are using the Manage Workload Profile or VPA tool for the first time with the selected ESS device, you will need to complete the ESS user validation. This is described in detail in “ESS User Validation” on page 485. The ESS User Validation is the same for the VPA and Manage Workload Profile tools. After successful ESS user validation, the Manage Workload Profile panel opens as shown in Figure 12-102. Figure 12-102 Manage workload profiles You can create or manage a workload profile using the following three methods: 1. Selecting a predefined workload profile Several predefined workloads are shipped with Performance Manager. You can use the Choose workload profile panel to select the predefined workload profile that most closely matches your storage allocation needs. The default profiles shipped with Performance Manager are shown in Figure 12-103. Figure 12-103 Default workload profiles You can open the properties panel of the respective predefined profile to verify the profile details. A sample profile for OLTP Standard is shown in Figure 12-83 on page 493.
2. Creating a workload profile similar to another profile You can use the Create like panel to modify the details of a selected workload profile. You can then save the changes and assign a new name to create a new workload profile from the existing profile. To Create like a particular profile, these are the tasks involved: a. Create a performance data collection task for the target storage device: You may need to include multiple storage devices based on your profile requirements for the application. b. Schedule the data collection task: You may need to ensure that the data collection task runs over a sufficient period of time that truly represents a typical I/O load of the respective application. The key is to have sufficient historical data. Tip: A best practice is to schedule the frequency of a performance data collection task in such a way that it covers peak load periods of I/O activity and has at least a few samples of peak loads. The number of samples depends on the I/O characteristics of the application. c. Determine the closest workload profile match: Determine whether the new workload profile matches an existing or predefined profile. Note that it may not be an exact fit, but it should be of a somewhat similar type. d. Create the new similar profile: Using the Manage Workload Profile task, create the new profile. You will need to select the appropriate time period for the historical data that you collected earlier. In our example, we created a similar profile using the Batch Sequential predefined profile. First, we select the Batch Sequential profile and click the Create like button as shown in Figure 12-104. Figure 12-104 Manage workload profile - create like
The Properties panel for Batch Sequential is opened, as shown in Figure 12-105. Figure 12-105 Properties for Batch sequential profile We changed the following values for our new profile: Name: ITSO_Batch_Daily Description: For ITSO batch applications Average transfer size: 20 KB Sequential reads: 65% Random reads: 10% Peak Activity information: We used a time period of the past 24 days, from 12 AM to 11 PM.
  • 534. We saved our new profile (see Figure 12-106). Figure 12-106 New Profile 514 IBM TotalStorage Productivity Center V2.3: Getting Started
• 535. This new profile, ITSO_Batch_Daily, is now available in the Manage workload profile panel as shown in Figure 12-107, and can now be used for VPA analysis. This completes our example. Figure 12-107 Manage profile panel with new profile 3. Creating a new workload profile from historical data You can use the Manage workload profile panel to create a workload profile based on historical data about existing volumes. You can select one or more volumes as the base for the new workload profile. You can then assign a name to the workload profile, optionally provide a description, and finally create the new profile. To create a new workload profile, click the Create button as shown in Figure 12-108. Figure 12-108 Create a new workload profile Chapter 12. Using TotalStorage Productivity Center Performance Manager 515
• 536. This launches a new panel for creating a workload profile, as shown in Figure 12-109. At this stage, you need to specify the volumes for performance data analysis. In our example, we selected all volumes. To select a range of volumes without selecting all of them, click the first volume, hold the Shift key, and click the last volume in the list. After all the required volumes are selected (highlighted in dark blue), click the Add button. See Figure 12-109. Note: The ESS volumes you specify should be representative of the I/O behavior of the application for which you are planning to allocate space using the VPA tool. Figure 12-109 Create new profile and add volumes 516 IBM TotalStorage Productivity Center V2.3: Getting Started
• 537. When you click the Add button, all the selected volumes are moved to the Selected volumes box as shown in Figure 12-110. Figure 12-110 Selected volumes and performance period for new workload profile In the Peak activity information box, you need to specify an activity sample period for the Volume performance analysis. You can select the option Use all available performance data or select Use the specified peak activity period. Based on your application's peak I/O behavior, you may specify the sample period with a Start date, Duration in days, and Start / End time. For the time setting, you can choose from the drop-down box: Device time, Client time, Server time, or GMT. Chapter 12. Using TotalStorage Productivity Center Performance Manager 517
• 538. After you have entered all the fields, click Next. You will see the Create workload profile - Review panel as shown in Figure 12-111. Figure 12-111 Review new workload profile parameters You can specify a Name for the new workload profile and a Description. You may enter a detailed description that covers: The application name for which the profile is being created What application I/O activity is represented by the peak activity sample When it was created Who created it (optional) Any other relevant information your organization requires In our example, we created a profile named New_ITSO_app1_profile. At this point you may click Finish. TotalStorage Productivity Center for Disk then begins the Volume performance analysis based on the parameters you have provided. This process may take some time, depending on the number of volumes and the sampling time period, so be patient. Finally, it shows the outcome of the analysis. 518 IBM TotalStorage Productivity Center V2.3: Getting Started
• 539. In our example, we got the results notification message shown in Figure 12-112. The analysis determined that the results are not statistically significant, as indicated by the message: BWN005965E: Analysis results are not significant. This may indicate that: There is not enough I/O activity on the selected volumes, OR The time period chosen for sampling is not correct, OR The correct volumes were not chosen You have the option to Save or Discard the profile. We decided to save the profile (Figure 12-113). Figure 12-112 Results for Create Profile Upon saving the profile, it is listed in the Manage workload profile panel as shown in Figure 12-113. Figure 12-113 Manage workload profile with new saved profile The new profile can now be referenced by VPA for future use. Chapter 12. Using TotalStorage Productivity Center Performance Manager 519
  • 541. 13 Chapter 13. Using TotalStorage Productivity Center for Data This chapter introduces you to the TotalStorage Productivity Center for Data and discusses the available functions. The information in this chapter provides the information necessary to accomplish the following tasks: Discover and monitor storage assets enterprise-wide Report on enterprise-wide assets, files and filesystems, databases, users, and applications Provide alerts (set by the user) on issues such as capacity problems, policy violations, etc. Support chargebacks by usage or capacity © Copyright IBM Corp. 2005. All rights reserved. 521
  • 542. 13.1 TotalStorage Productivity Center for Data overview This section describes the business purpose of TotalStorage Productivity Center for Data (Data Manager), its architecture, components, and supported platforms. 13.1.1 Business purpose of TotalStorage Productivity Center for Data The primary business purpose of TotalStorage Productivity Center for Data is to help the storage administrator keep data available to applications so the company can produce revenue. Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets by allowing administrators to use their existing storage more efficiently, and more accurately predict future storage growth. 13.1.2 Components of TotalStorage Productivity Center for Data At a high level, the major components of TotalStorage Productivity Center for Data are: Server, running on a managing server, with access to a database repository Agents, running on one or more Managed Devices Clients (using either a locally installed GUI, or a browser-based Web GUI) which users and administrators use to perform storage monitoring tasks. Data Manager Server The Data Manager Server: Controls the discovery, reporting, and Alert functions Stores all data in the central repository Issues commands to Agents for jobs (either scheduled or ad hoc) Receives requests from the user interface clients for information, and retrieves the requested information from the central data repository. 
Extends filesystems automatically Reports on the IBM TotalStorage Enterprise Storage Server (ESS) and can also provide LUN provisioning An RDBMS (either local or remote) manages the repository of data collected from the Agents, and the reporting and monitoring capabilities defined by the users. WWW Server The WWW Server is optional, and handles communications to allow remote Web access to the Server. The WWW Server can run on the same physical server as the Data Manager Server. Data Agent (on a Managed System) The Agent runs Probes and Scans, collects storage-related information from the managed system, and forwards it to the Manager to be stored in the database repository, and acted on if so defined. An Agent is required for every host system to be monitored, with the exception of NetWare and NAS devices. 522 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 543. Novell NetWare and NAS devices do not currently support locally installed Agents - they are managed through an Agent installed on a machine that uses (accesses) the NetWare or NAS device. The Agent will discover information on the volumes or filesystems that are accessible to the Agent’s host. The Agents are quite lightweight. Agents listen for commands from the host, and then perform a Probe (against the operating system), and/or a Scan (against selected filesystems). Normal operations might see one scheduled Scan per day or week, plus various ad hoc Scans. Scans and Probes are discussed later in this chapter. Clients (direct-connected and Web connected) Direct-connect Clients have the GUI to the Server installed locally. They communicate directly to the Manager to perform administration, monitoring, and reporting. The Manager retrieves information requested by the Clients from the database repository. Web-connect clients use the WWW Server to access the user interface through a Web browser. The Java administrative applet is downloaded to the Web Client machine and presents the same user interface that Direct-connect Clients see. 13.1.3 Security considerations TotalStorage Productivity Center for Data has two security levels: non-administrative users and administrators: Non-administrator users can: – View the data collected by TotalStorage Productivity Center for Data – Create, generate, and save reports Administrators can: – Create, modify, and schedule Pings, Probes, and Scans – Create, generate, and save reports – Perform administrative tasks and customize the TotalStorage Productivity Center for Data environment – Create Groups, Profiles, Quotas, and Constraints – Set Alerts 13.2 Functions of TotalStorage Productivity Center for Data An overview of the functions of TotalStorage Productivity Center for Data is provided in this section and explored in detail in the rest of the chapter. 
TotalStorage Productivity Center for Data is designed to be easy to use and quick to install, with flexible and powerful configuration options. The main functions of the product are: Automatically discover and monitor disks, partitions, shared directories, and servers Reporting to track asset usage and availability – Physical inventory - disks, partitions, servers – Logical inventory - filesystems and files, databases and tables – Forecasting demand versus capacity – Standardized and customized reports, on-demand and batched – Various user-defined levels of grouping – From summary level down to individual file or user ID granularity Alerts - execute scripts, e-mail, SNMP traps, event log Quotas Chargeback Chapter 13. Using TotalStorage Productivity Center for Data 523
• 544. 13.2.1 Basic menu displays Figure 13-1 shows the main menu for TotalStorage Productivity Center for Data. You can see that the configured Agents show under the Agents entry. This display thus gives a quick summary of the state of each Agent. Several icons indicate the status of the Agents: Green circle - Agent is communicating with the Server. Red crossed circle - Agent is down. Red triangle - Agent on that system is not reachable. Red crossed square - Agent was connected, but an update of the TotalStorage Productivity Center for Data Agent is currently running. Figure 13-1 Agent summary 524 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 545. Figure 13-2 shows the TotalStorage Productivity Center for Data dashboard. This is the default right-hand pane display when you start TotalStorage Productivity Center for Data and shows a quick summary of the overall health of the storage environment. It can quickly show you potential problem areas for further investigation. Figure 13-2 TotalStorage Productivity Center for Data - dashboard The dashboard contains four viewable areas, which cycle among seven pre-defined sets of panels. To cycle, use the Cycle Panels button. Use the Refresh button to update the display. Enterprise-wide summary The Enterprise-wide Summary panel shows statistics accumulated from all the Agents. The statistics are: Total filesystem capacity available Total filesystem capacity used Total filesystem free capacity Total allocated and unallocated disk space Total disk space unallocated to filesystems Total LUN capacity Total usable LUN capacity Total number of monitored servers Total number of unmonitored servers Total number of storage subsystems Total number of users Total number of disks Total number of LUNs Total number of filesystems Total number of directories Total number of files Chapter 13. Using TotalStorage Productivity Center for Data 525
• 546. Filesystem Used Space This panel displays a pie chart showing the distribution of used and free space in all filesystems. Different chart types can be selected here. This provides a quick snapshot of your filesystem space utilization efficiency. Users Consuming the Most Space By default this panel displays a bar chart (different chart types can be selected) of the users who are using the largest amount of filesystem space. Monitored Server Summary This panel shows a table of total disk filesystem capacity for the monitored servers, sorted by OS type. Filesystems with Least Free Space Percentage This panel shows a table of the most full filesystems, including the percent of space free, the total filesystem capacity, and the filesystem mount point. Users Consuming the Most Space Report This panel shows the same information as the Users Consuming the Most Space panel, but in a table format. Alerts Pending This panel shows active Alerts that have been triggered but are still pending. 13.2.2 Discover and monitor Agents, disks, filesystems, and databases TotalStorage Productivity Center for Data uses three methods to discover information about the assets in the storage environment: Pings, Probes, and Scans. These are typically set up to run automatically as scheduled tasks. You can define different Ping, Probe, and Scan jobs to run against different Agents or groups of Agents (for example, to run a regular Probe of all Windows systems) according to your particular requirements. Pings A Ping is a standard ICMP Ping which checks registered Agents for availability. If an Agent does not respond to a Ping (or to a pre-defined number of Pings), you can set up an Alert to take some action. The actions can be one, any, or all of: SNMP trap TEC Event Notification at login Entry in the Windows event log Run a script Send e-mail to specified users 526 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 547. Pings are used to generate Availability Reports, which list the percentage of times a computer has responded to the Ping. An example of an Availability Report for Ping is shown in Figure 13-3. Availability Reports are discussed in detail in 13.11.3, “Availability Reporting” on page 604. Figure 13-3 Availability Report - Ping Probes Probes are used to gather information about the assets and system resources of monitored servers, such as processor count and speed, memory size, disk count and size, filesystems, etc. The data collected by the Probe process is used in the Asset Reports described in 13.11.1, “Asset Reporting” on page 595. Figure 13-4 shows an Asset report for detected disks. Figure 13-4 Asset Report of discovered disks Chapter 13. Using TotalStorage Productivity Center for Data 527
• 548. Figure 13-5 shows an Asset Report for detected database tablespaces. Figure 13-5 Asset Report of database tablespaces Scans The Scan process is used to gather statistics about usage and trends of the server storage. Data collected by Scan jobs is tailored by Profiles. Results of Scan jobs are stored in the enterprise repository, and supply the data for the Capacity, Usage, Usage Violations, and Backup Reporting functions. These reports can be scheduled to run regularly, or they can be run ad hoc by the administrator. Profiles limit the scanning according to the parameters specified in the Profile. Profiles are used in Scan jobs to specify what file patterns will be scanned, what attributes will be gathered, what summary views will be available in reports, and the retention period for the statistics. TotalStorage Productivity Center for Data supplies a number of default Profiles, which can be used as-is, or additional Profiles can be defined. Table 13-1 on page 547 shows the default Profiles provided. Some of these include: Largest files - Gathers statistics on the largest files Largest directories - Gathers statistics on the largest directories Most at risk - Gathers statistics on the files that were modified longest ago and have not been backed up since being modified (Windows Agents only) 528 IBM TotalStorage Productivity Center V2.3: Getting Started
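To illustrate the kind of statistic a Scan using the Largest files Profile gathers, the following Python sketch walks a directory tree and keeps the N largest files. It is a conceptual illustration only, not code from the product; the function name and the choice of N are our own.

```python
import heapq
import os

def largest_files(root, n=20):
    """Walk a directory tree and return the n largest files as
    (size_in_bytes, path) tuples, largest first -- roughly the
    statistic a "Largest files" Scan Profile collects."""
    heap = []  # min-heap of the n largest (size, path) seen so far
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if len(heap) < n:
                heapq.heappush(heap, (size, path))
            elif size > heap[0][0]:
                heapq.heapreplace(heap, (size, path))
    return sorted(heap, reverse=True)
```

Using a bounded min-heap keeps memory use proportional to n rather than to the number of files scanned, which matters when a filesystem contains millions of files.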
• 549. Figure 13-6 shows a sample report produced from data collected by Scans. Figure 13-6 Summary View - by filesystem, disk space used and disk space free This report shows a list of the filesystems on each Agent, the amount of space used in each, expressed in bytes and as a percentage, the amount of free space, and the total capacity available in the filesystem. 13.2.3 Reporting Reporting in TotalStorage Productivity Center for Data is very powerful, with over 300 pre-defined views, and the capability to customize those standard views, save the custom report, and add it to your menu for scheduled or ad hoc reports. You can also create your own individual reports according to particular needs and set them to run as needed, or in batch (regularly). Reports can be produced in table format or in a variety of chart (graph) views. You can export reports to CSV or HTML formats for external use. Reports are generated against data already in the repository. A common practice is to schedule Scans and Probes just before running reports. Reporting can be done at almost any level in the system, from the enterprise down to a specific entity and any level in between. Figure 13-6 shows a high-level summary report. Or, you can drill down to something very specific. Figure 13-7 is an example of a lower-level report, where the administrator has focused on a particular Agent, KANAGA, to look at a particular disk on a particular controller. Chapter 13. Using TotalStorage Productivity Center for Data 529
• 550. Figure 13-7 Asset Report - KANAGA assets Reports can be produced either system-wide or grouped into views, such as by computer or OS type. Restriction: Currently, there is a maximum of 32,767 (2¹⁵ - 1) rows per report. Therefore, you cannot produce a report to list all the .HTM files in a directory containing a million files. However, you can (and it would be more productive to do so) produce a report of the 20 largest files in the directory, or the 20 oldest files, for example. TotalStorage Productivity Center for Data allows you to group information about similar entities (disks, filesystems, etc.) from different servers or business units into a summary report, so that business and technology administrators can manage an enterprise infrastructure. Or, you can summarize information from a specific server - the flexibility and choice of configuration is entirely up to the administrator. You can report as of a point in time, or produce a historical report showing storage growth trends over time. Reporting lets you track actual demand for disk over time, and then use this information to forecast future demand for the next quarter, two quarters, year, etc. Figure 13-8 is an example of a historical report, showing a graph of the number of files on the C drive on the Agent KANAGA. 530 IBM TotalStorage Productivity Center V2.3: Getting Started
• 551. Figure 13-8 Historical report of filesystem utilization TotalStorage Productivity Center for Data has three basic types of reports: Computers and filesystems Databases Chargeback Reporting categories Major reporting categories for filesystems and databases are: Asset Reporting uses the data collected by Probes to build a hardware inventory of the storage assets. You can then navigate through a hierarchical view of the assets by drilling down through computers, controllers, disks, filesystems, directories, and exports. For database reporting, information on instances, databases, tables, and data files is presented for reporting. Storage Subsystems Reporting provides information showing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level. These reports also enable you to view the relationships among the components of a storage subsystem. For a list of supported devices, see <table>. Availability Reporting shows responses to Ping jobs, as well as computer uptime. Capacity Reporting shows how much storage capacity is installed, how much of the installed capacity is being used, and how much is available for future growth. Reporting is done by disk and filesystem, and for databases, by database. Usage Reporting shows the usage and growth of storage consumption, grouped by filesystem and computer, by individual users, or enterprise-wide. Usage Violation Reporting shows violations of the corporate storage usage policies, as defined through TotalStorage Productivity Center for Data. Violations are either of Quota (defining how much storage a user or group of users is allowed) or Constraint (defining which file types, owners, and file sizes are allowed on a computer or storage entity). You can define what action should be taken when a violation is detected - for example, SNMP trap, e-mail, or running a user-written script. Backup Reporting identifies files which are at risk because they have not been backed up. Chapter 13. Using TotalStorage Productivity Center for Data 531
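Conceptually, the Quota check behind Usage Violation Reporting compares each user's consumed storage against an allowed limit. The following Python sketch shows that idea; it is our own illustration under assumed names and data shapes, not code from the product.

```python
def quota_violations(usage_by_user, quota_by_user, default_quota):
    """Flag users whose consumed storage exceeds their Quota.
    usage_by_user and quota_by_user map user IDs to bytes; users
    without an explicit Quota fall back to default_quota.  Returns
    (user, used_bytes, quota_bytes) tuples for each violation."""
    violations = []
    for user, used in usage_by_user.items():
        quota = quota_by_user.get(user, default_quota)
        if used > quota:
            violations.append((user, used, quota))
    return violations
```

In the product, a detected violation can additionally trigger the configured actions (SNMP trap, e-mail, or a user-written script); this sketch covers only the comparison itself.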
• 552. Reporting on the Web It is easy to customize TotalStorage Productivity Center for Data to set up a reports Web site, so that anyone in the organization can view selected reports through their browser. 13.16, “Setting up a reports Web site” on page 698 explains how to do this. Figure 13-9 shows an example of a simple Web site to view TotalStorage Productivity Center for Data reports. Figure 13-9 TotalStorage Productivity Center for Data Reports on the Web 13.2.4 Alerts An Alert defines an action to be performed if a particular event occurs or condition is found. Alerts can be set on physical objects (computers and disks) or logical objects (filesystems, directories, users, databases, and OS user groups). Alerts can tell you, for instance, if a disk has a lot of recent defects, or if a filesystem or database is approaching capacity. Alerts on computers and disks come from the output of Probe jobs and are generated for each object that meets the triggering condition. If you have specified a triggered action (running a script, sending an e-mail, etc.), that action will be performed when the condition is met. Alerts on filesystems, directories, users, and OS user groups come from the combined output of a Probe and a Scan. Again, if you have specified an action, that action will be performed when the condition is met. An Alert will register in the Alert log, and you can also define one, some, or all of the following actions to be performed in addition: Send an e-mail indicating the nature of the Alert. Run a specific script with relevant parameters supplied from the content of the Alert. Make an entry into the Windows event log. Pop up next time the user logs in to TotalStorage Productivity Center for Data. Send an SNMP trap. Log a TEC event. Refer to 13.4, “OS Alerts” on page 555 for details on Alerts. 532 IBM TotalStorage Productivity Center V2.3: Getting Started
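As an illustration of how a "filesystem approaching capacity" Alert condition can be evaluated, consider the following Python sketch. The threshold, data shapes, and function name are our own assumptions; it is not the product's implementation.

```python
def check_capacity_alert(filesystems, threshold_pct=90.0):
    """Evaluate a hypothetical 'filesystem approaching capacity'
    Alert condition.  filesystems is an iterable of
    (mount_point, used_bytes, capacity_bytes) tuples; returns the
    (mount_point, used_pct) pairs that meet or exceed the threshold
    and would therefore trigger the configured Alert actions."""
    triggered = []
    for mount, used, capacity in filesystems:
        if capacity == 0:
            continue  # skip pseudo-filesystems with no capacity
        pct = 100.0 * used / capacity
        if pct >= threshold_pct:
            triggered.append((mount, round(pct, 1)))
    return triggered
```

Each entry in the returned list corresponds to one triggered Alert; dispatching the actions themselves (e-mail, script, SNMP trap, TEC event) would be a separate step.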
• 553. 13.2.5 Chargeback: Charging for storage usage TotalStorage Productivity Center for Data provides the ability to produce Chargeback information for storage usage. The following items can have charges allocated against them: Operating system storage by user Operating system disk capacity by computer Storage usage by database user Total size by database tablespace TotalStorage Productivity Center for Data can directly produce an invoice or create a file in CIMS format. CIMS is a set of resource accounting tools that allow you to track, manage, allocate, and charge for IT resources and costs. For more information on CIMS see the Web site: http://www.cims.com Chargeback is a very powerful tool for raising awareness within the organization of the cost of storage, and the need to have the appropriate tools and processes in place to manage storage effectively and efficiently. Refer to 13.17, “Charging for storage usage” on page 700 for more details on Chargeback. 13.3 OS Monitoring The Monitoring features of TotalStorage Productivity Center for Data enable you to run regularly scheduled or ad hoc data collection jobs. These jobs gather statistics about the storage assets, their availability, and their usage within your enterprise, and make the collected data available for reporting. This section gives a quick overview of the monitoring jobs, and explains how they work through practical examples. Reporting on the collected data is explained in “Data Manager reporting capabilities” on page 592. Chapter 13. Using TotalStorage Productivity Center for Data 533
• 554. 13.3.1 Navigation tree Figure 13-10 shows the complete navigation tree for OS Monitoring, which includes Groups, Discovery, Pings, Probes, Scans, and Profiles. Figure 13-10 OS Monitoring tree Except for Discovery, you can create multiple definitions for each of these monitoring features of TotalStorage Productivity Center for Data. To create a new definition, right-click the feature and select Create <feature>. Figure 13-11 shows how to create a new Scan job. Figure 13-11 Creating a new Scan job 534 IBM TotalStorage Productivity Center V2.3: Getting Started
• 555. Once saved, any definition within TotalStorage Productivity Center for Data can be updated by clicking the object. This puts you in Edit mode. Save your changes by clicking the floppy disk icon in the top menu bar. The Discovery, Pings, Probes, and Scans menus contain jobs that can run on a scheduled basis or ad hoc. To execute a job immediately, right-click the job, then select Run Now (see Figure 13-12). Each execution of a job creates a time-stamped output that can be displayed by expanding the tree under the job (you may need to right-click the job and select Refresh Job List). Figure 13-12 OS Monitoring - Jobs list The color of the job output represents the job status: Green - Successful run Brown - Warnings occurred during the run Red - Errors occurred during the run Blue - Job is running To view the output of a job, double-click the job. Groups and Profiles are definitions that may be used by other jobs - they do not produce an output in themselves. As shown in Figure 13-12, all objects created within Data Manager are prefixed with the user ID of the creator. Default definitions, created during product installation, are prefixed with TPCUser.Default. Groups, Discovery, Probes, Scans, and Profiles are explained in the following sections. 13.3.2 Groups Before defining monitoring and management jobs, it may be useful to group your resources so you can limit the scope of monitoring or data collection. Chapter 13. Using TotalStorage Productivity Center for Data 535
• 556. Computer Groups Computer Groups allow you to target management jobs at specific computers based on your own criteria. Some criteria you might consider for grouping computers are platform type, application type, database type, and environment type (for example, test or production). Our lab environment contains Windows 2000 servers. In order to target specific servers for monitoring based on OS and/or database type, we defined the following groups: Windows Systems Windows DB Systems To create the first group, expand Data Manager → Monitoring → Groups → Computer, right-click Computer and select Create Computer Group. Our first group will contain all Windows systems as shown in Figure 13-13. To add or remove a host from the group, highlight it in either the Available or Current Selections panel and use the arrow buttons. You can also enter a meaningful description for the group. Figure 13-13 Computer Group definition To save the new Group, click the floppy disk icon in the menu bar, and enter the Group name in the confirmation box shown in Figure 13-14. Figure 13-14 Save a new Computer Group 536 IBM TotalStorage Productivity Center V2.3: Getting Started
• 557. We created the other group using the same process, and named it Windows DB Systems. Important: To avoid redundant data collection, a computer can belong to only one Group at a time. If you add a system that is already in a Group to a second Group, it is automatically removed from the first Group. Figure 13-15 shows the final Group configuration, with the members of the Windows Systems group. Figure 13-15 Final Computers Group definitions Note: The default group TPCUser.Default Computer Group contains all servers that have been discovered, but not yet assigned to a Group. Filesystem Groups Filesystem Groups are used to associate filesystems from different computers that have some commonality. You can then use this group definition to focus the Scan and Alert processes on those filesystems. To create a Filesystem Group, you have to select each filesystem explicitly for each computer you want to include in the group. There is no way to do a grouped selection, e.g. the / (root) filesystem for all UNIX servers or C: for all Windows platforms. Note: As for computers, a filesystem can belong to only one Group. Chapter 13. Using TotalStorage Productivity Center for Data 537
• 558. Directory Groups Use Directory Groups to group together directories to which you want to apply the same storage management rules. Figure 13-16 shows the Directory Group definition screen, which you reach by going to Data Manager → Monitoring → Groups → Directory, right-clicking Directory, and selecting Create Directory Group. Figure 13-16 Directory group definition The Directory Group definition has two views for directory selection: Use directories by computer to specify several directories for one computer. Use computers by directory to specify one directory for several computers. The button on the bottom of the screen toggles between New computer and New directory depending on the view you select. 538 IBM TotalStorage Productivity Center V2.3: Getting Started
• 559. We will define one Directory Group with a DB2 directory for a specific computer (Colorado). To define the Group: 1. Select directories by computer. 2. Click New computer. 3. Select colorado from the pull-down Computer field. 4. Enter C:\DB2\NODE0000 in the Directories field and click Add (see Figure 13-17). Figure 13-17 Directories for computer configuration 5. Click OK. 6. Save the group as DB2 Node. Figure 13-18 shows our final Groups configuration and details of the OracleArchive Group. Figure 13-18 Final Directories Group definition Chapter 13. Using TotalStorage Productivity Center for Data 539
• 560. User Groups You can define Groups made up of selected user IDs. These groupings enable you to easily define and focus storage management rules, such as scanning and Constraints, on the defined IDs. Note: You can include in a User Group only user IDs that are defined on the discovered hosts and that have files belonging to them. Note: As with computers, a user can be defined in only one Group. OS User Group Groups You can define Groups consisting of operating system user groups, such as Administrators for Windows or adm for UNIX. To define a Group consisting of user groups, select OS User Group from the Groups entry on the left-hand panel. Note: As for users, an OS User Group will be added to the list of available Groups only when a Scan job finds at least one file owned by a user belonging to that Group. Note: As with users, an OS User Group can belong to only one Group at a time. 13.3.3 Discovery The Discovery process is used to discover new computers within your enterprise that have not yet been monitored by Data Manager. The Discovery process will: Request a list of Windows systems from the Windows Domain Controller Contact, through SNMP, all NAS filers and check if they are registered in the nas.config file Discover all NetWare servers in the NetWare trees reported by Agents Search UNIX Agents’ mount tables, looking for remote filesystems, and discover NAS filers More details of NAS and NetWare discovery are given in the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886. Use the path Data Manager → Monitoring → Discovery to change the settings of the Discovery job. The following options are available. When to run tab The initial tab, When to Run (Figure 13-19), is used to modify the scheduling settings. You can specify when to execute the Discovery: Now - Run once, when the job is saved. Once - Run once, at a specified time in the future. Repeatedly - Run at a chosen frequency in minutes, hours, days, weeks, or months. 
You can limit the run to specific days of the week.

IBM TotalStorage Productivity Center V2.3: Getting Started
Figure 13-19 Discovery When to Run options

Alert tab
The second tab, Alert, enables you to be notified when a new computer is discovered. See 13.4, “OS Alerts” on page 555 for more details on the Alerting process.

Options tab
The third tab, Options (Figure 13-20), sets the Discovery runtime properties.

Figure 13-20 Discovery job options
Uncheck the Skip Workstations field if you want to discover the Windows workstations reported by the Windows Domain Controller.

13.3.4 Pings
The Ping process will:
- Launch TCP/IP pings against monitored computers
- Generate statistics on computer availability in the central repository
- Generate an Alert if the process fails because of an unavailable host

Pings gather statistics about the availability of monitored servers. The scheduled job pings your servers and considers them active if it gets an answer. This check is purely ICMP-based; there is no measurement of individual application availability. When you create a new Ping job, you can set the following options.

Computers tab
Figure 13-21 shows the Computers tab, which is used to limit the scope of the computers that are to be pinged.

Figure 13-21 Ping job configuration - Computers
When to Ping tab
The When to PING tab sets the frequency used for checking. We selected a frequency of 10 minutes, as shown in Figure 13-22.

Figure 13-22 Ping job configuration - When to Ping

Options tab
On the Options tab, you specify how often the Ping statistics are saved in the database repository. By default, TotalStorage Productivity Center for Data keeps its Ping statistics in memory for eight Pings before flushing them to the database and calculating an average availability. You can change the flushing interval to another amount of time, or to a number of Pings (for example, to calculate availability after every 10 Pings). The availability is calculated as:

(count of successful pings) / (count of pings)

A lower interval increases the database size, but gives you a more accurate availability history. We selected to save to the database at each Ping, which means each saved value is either 100% or 0%, but we get a more granular view of the availability of our servers (Figure 13-23).
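The availability calculation described above can be sketched as follows. This is an illustrative example, not product code; the function name and data layout are assumptions made for the sketch.

```python
# Illustrative sketch (not product code): deriving availability figures
# from ping results that are flushed to the repository every N pings.
def availability(results, flush_interval=8):
    """results: list of booleans (True = ping answered).
    Returns one averaged availability value (percent) per flush interval."""
    averages = []
    for i in range(0, len(results), flush_interval):
        chunk = results[i:i + flush_interval]
        # (count of successful pings) / (count of pings)
        averages.append(100.0 * sum(chunk) / len(chunk))
    return averages

# Default behavior: one averaged value per 8 pings
print(availability([True] * 7 + [False], flush_interval=8))   # [87.5]
# Flushing at each ping: every value is 100% or 0%, but more granular
print(availability([True, True, False], flush_interval=1))    # [100.0, 100.0, 0.0]
```

With a flush interval of 1, the history is larger but shows exactly when each server was unreachable, which matches the trade-off described above.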
Figure 13-23 Ping job configuration - Options

Alert tab
The Alert tab (shown in Figure 13-24) is used to generate Alerts for each host that is unavailable. Alert mechanisms are explained in more detail in 13.4, “OS Alerts” on page 555. You can choose any of the following Alert types:
- SNMP Trap - Send a trap to the event manager defined in Administrative services → Configuration → General → Alert Disposition
- TEC Event - Send an event to a Tivoli Enterprise Console
- Login Notification - Direct the Alert to the specified user in the Alert Log (see 13.4, “OS Alerts” on page 555)
- Windows Event Log - Generate an event in the Windows event log
- Run Script - Run a script on the specified server
- Email - Send a mail to the specified user through the mail server defined in Administrative services → Configuration → General → Alert Disposition
Figure 13-24 Ping job configuration - Alert

We selected to run a script that sends popup messages to selected administrators. The script is listed in Example 13-1. Optimally, you would send an event to a central console such as the Tivoli Enterprise Console. Note that certain parameters are passed to the script; more information is given in “Alerts tab” on page 560.

Example 13-1 Script PINGFAILED.BAT
net send /DOMAIN:Colorado Computer %1 did not respond to last %2 ping(s). Please check it

We then saved the Ping job as PingHosts and tested it by right-clicking and selecting Run now. As the hosts did not respond, we received notifications as shown in Figure 13-25.

Figure 13-25 Ping failed popup for GALLIUM

More details about the related reporting features of TotalStorage Productivity Center for Data are in 13.11.3, “Availability Reporting” on page 604.

13.3.5 Probes
The Probe process will:
- Gather asset data on monitored computers
- Store the data in the central repository
- Generate an Alert if the process fails
The Probe process gathers data about the assets and system resources of Agents, such as:
- Memory size
- Processor count and speed
- Hard disks
- Filesystems

The data collected by the Probe process is used by the Asset Reports described in 13.11.1, “Asset Reporting” on page 595.

Computers tab
Figure 13-26 shows that we included the TPCUser.Default Computer Group in the Probe so that all computers, including those not yet assigned to an existing Group, will be probed. We saved the Probe as ProbeHosts.

Figure 13-26 New Probe configuration

Important: Only the filesystems that have been returned by a Probe job will be available for further use by Scans, Alerts, and policy management within TotalStorage Productivity Center for Data.

When to Probe tab
This tab has the same configuration as for the Ping process. We set up a weekly Probe to run on Sunday for all computers. We recommend running the Probe job at a time when all the production data you want to monitor is available to the system.

Alert tab
As this is not a business-critical process, we asked to be alerted by mail for any failed Probe. Figure 13-27 shows the default mail text configuration for a Probe failure.
Figure 13-27 Probe alert - mail configuration

13.3.6 Profiles
Profiles are used in Scan jobs to:
- Limit the files to be scanned
- Specify the file attributes to be gathered
- Select the summary view (directories and filesystems, user IDs, OS user groups)
- Set the statistics retention period

TotalStorage Productivity Center for Data provides default Profiles that supply data for all the default reports. Profiles are used in Scan jobs to specify:
- The pattern of files to be scanned
- The attributes of files to be gathered
- The summary view that will be available in reports
- The statistics retention period

Specifying correct Profiles avoids gathering unnecessary information that may lead to space problems within the repository. However, you will not be able to report on, or check Quotas for, files that are not covered by the Profile. Data Manager comes with several default Profiles (shown in Table 13-1), prefixed with TPCUser, which can be reused in any Scan jobs you define.

Table 13-1 Default profiles
BY_ACCESS - Gathers statistics by length of time since last access of files
BY_CREATION - Gathers statistics by length of time since creation of files
BY_MOD_NOT_BACKED_UP - Gathers statistics by length of time since last modification (only for files not backed up since modification). Windows only.
BY_MODIFICATION - Gathers statistics by length of time since last modification of files
FILE_SIZE_DISTRIBUTION - Gathers file size distribution statistics
LARGEST_DIRECTORIES - Gathers statistics on the n largest directories (20 is the default amount)
LARGEST_FILES - Gathers statistics on the n largest files (20 is the default amount)
LARGEST_ORPHANS - Gathers statistics on the n largest orphan files (20 is the default amount)
MOST_AT_RISK - Gathers statistics on the n files that were modified the longest time ago and have not been backed up since they were modified. Windows only. (20 is the default amount)
OLDEST_ORPHANS - Gathers statistics on the n oldest orphan files (20 is the default amount)
MOST_OBSOLETE_FILES - Gathers statistics on the n “most obsolete” files, that is, files that have not been accessed or modified for the longest period of time (20 is the default amount)
SUMMARY_BY_FILE_TYPE - Summarizes space usage by file extension
SUMMARY_BY_FILESYSTEM/DIRECTORY - Summarizes space usage by filesystem or directory
SUMMARY_BY_GROUP - Summarizes space usage by OS group
SUMMARY_BY_OWNER - Summarizes space usage by owner
TEMPORARY_FILES - Gathers statistics on network-wide space consumed by temporary files
WASTED_SPACE - Gathers statistics on non-OS files not accessed in the last year, and orphaned files

These default Profiles, when set in a Scan job, gather the data needed for all the default Data Manager reports. As an example, we will define an additional Profile to limit a Scan job to the 500 largest PostScript or PDF files unused in the last six months. We also want to keep weekly statistics at a filesystem and directory level for two weeks.
Statistics tab
On the Statistics tab (shown in Figure 13-28), we specified:
- Retain the filesystem summary for two weeks
- Gather data based on creation date
- Select the 500 largest files

The Statistics tab is used to specify the type of data that is gathered, and it has a direct impact on the type of reports that will be available. In our specific case, the Scan associated with this Profile will not create data for reports based on user IDs and user groups. Neither will it create data for reports on directory size.
Figure 13-28 New Profile - Statistics tab

The Summarize space usage by section of the Statistics tab specifies how the space usage data must be summarized. If no summary level is checked, the data will not be summarized, and therefore will not be available for reporting at the corresponding level of the Usage Reporting section of TotalStorage Productivity Center for Data. In our particular case, because we selected to summarize by filesystem and directory, we will see the space used by PDF and PostScript files at those levels, provided we set up the Scan profile correctly. See 13.3.7, “Scans” on page 552 for information on this. We will not see which users or groups have allocated those PDF and PostScript files.

Restriction: For Windows servers, user and group statistics will not be created for FAT filesystems.

The Accumulate history section sets the retention period of the collected data. In this case, we will see a weekly summary for the last two weeks. The Gather statistics by length of time since section sets the base date used to calculate file age. It determines whether data will be gathered and summarized for the Data Manager → Reporting → Usage → Files reporting view. The Gather information on the section sets the number of files to retrieve for each of the report views available under Data Manager → Reporting → Usage → Access Load.

Files filter tab
The Files filter tab is used to limit the scope of files that are returned by the Scan job. To create a selection, right-click the All files selected context-menu option, as shown in Figure 13-29.
Figure 13-29 New Profile - File filter

With the New Condition menu, you can create a single filter on the files, while New Group enables you to combine several conditions with:
- All of - The file is selected if all conditions are met (AND)
- Any of - The file is selected if at least one condition is met (OR)
- None of - The file is selected only if none of the conditions are met (NOT OR)
- Not all of - The file is selected unless all of the conditions are met (NOT AND)

A Condition Group can contain individual conditions or other condition groups. Each individual condition filters files based on one of the listed items:
- Name
- Last access time
- Last modified
- Creation time
- Owner user ID
- Owner group
- Windows file attributes
- Size
- Type
- Length

We want to select files that meet both of our conditions: (name is *.ps or name is *.pdf) and unused for six months. The AND between our two conditions translates to All of, while the OR within our first condition translates to Any of. On the screen shown in Figure 13-29, we selected New Group. From the popup screen, Figure 13-30, we selected All of and clicked OK.
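The way nested condition groups combine can be sketched as follows. This is an illustrative model, not product code; the group names mirror the GUI labels, while the file attributes and helper names are assumptions made for the sketch.

```python
# Illustrative sketch of how nested condition groups combine.
# The operators "All of", "Any of", "None of", and "Not all of" mirror the
# GUI labels; the file-record fields used here are invented for the example.
import fnmatch

def evaluate(node, f):
    """node is either a predicate function, or a tuple
    (operator, [child nodes]) combining its children."""
    if callable(node):
        return node(f)
    op, children = node
    results = [evaluate(c, f) for c in children]
    if op == "All of":
        return all(results)        # AND
    if op == "Any of":
        return any(results)        # OR
    if op == "None of":
        return not any(results)    # NOT OR
    if op == "Not all of":
        return not all(results)    # NOT AND
    raise ValueError("unknown operator: " + op)

SIX_MONTHS = 183  # days, illustrative

# (name is *.ps OR name is *.pdf) AND unused for six months
condition = ("All of", [
    ("Any of", [lambda f: fnmatch.fnmatch(f["name"], "*.ps"),
                lambda f: fnmatch.fnmatch(f["name"], "*.pdf")]),
    lambda f: f["days_since_access"] > SIX_MONTHS,
])

print(evaluate(condition, {"name": "report.pdf", "days_since_access": 200}))  # True
print(evaluate(condition, {"name": "report.pdf", "days_since_access": 10}))   # False
```

The nested tuple structure corresponds directly to the condition tree built in the GUI: the outer All of group holds the dependent Any of group plus the age condition.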
Figure 13-30 New Condition Group

Now, within our All of group, we create one dependent Any of group using the same sequence. The result is shown in Figure 13-31.

Figure 13-31 New Profile - Conditions Groups

Now, we create individual conditions within each group by right-clicking New Condition on the group where the conditions must be created. Figure 13-32 shows the creation of our first condition for the Any of group. We enter our file specifications (*.ps and *.pdf) here.

Figure 13-32 New Profile - New condition
We repeated the operation for the second condition (in the All of group). The final result is shown in Figure 13-33.

Figure 13-33 New Profile - Conditions

The bottom of the right pane shows the textual form of the created condition. You can see that it corresponds to our initial condition. We saved the Profile as PS_PDF_FILES (Figure 13-34).

Figure 13-34 Profile save

13.3.7 Scans
The Scan process is used to gather data about files and to summarize usage statistics as specified in the associated Profiles. It is mandatory for Quotas and Constraints management. The Scan process gathers statistics about the usage and trends of the server storage. Scan job results are stored in the repository and supply the data necessary for the Capacity, Usage, Usage Violations, and Backup Reporting facilities.

To create a new Scan job, select Data Manager → Monitoring → Scans, right-click, and select Create Scan. The scope of each Scan job is set by five different tabs on the right pane.

Filesystems tab
You can specify a specific filesystem for one computer, a filesystem Group (see “Filesystem Groups” on page 537), or all filesystems for a specific computer. Only the filesystems you have selected will be scanned. Figure 13-35 shows how to configure the Scan to gather data on all our servers.

Note: Only filesystems found by the Probe process will be available for Scan.
Figure 13-35 New Scan configuration - Filesystem tab

Directory Groups tab
Use this tab to extend the scope of the Scan and also summarize data for the selected directories. Only directories in the previously selected filesystems will be scanned.

Profiles tab
As explained in 13.3.6, “Profiles” on page 547, Profiles are used to select the files that are scanned for information gathering. A Scan job scans and gathers data only for files that are scoped by the selected Profiles. You can specify Profiles at two levels:
- Filesystems: All selected filesystems will be scanned and the data summarized for each filesystem.
- Directory: All selected directories (if included in the filesystems) will be scanned and the data summarized for each directory.
Figure 13-36 shows how to configure a Scan to have data summarized at both the filesystem and directory level.

Figure 13-36 New Scan configuration - Profiles tab

When to Scan tab
As with the Probe and Ping jobs, the scheduling of the job is specified on the When to Scan tab.

Alert tab
You can be alerted through mail, script, Windows Event Log, SNMP trap, TEC event, or Login notification if the Scan job fails. The Scan job may fail if an Agent is unreachable. Click the floppy icon to save your new Scan job, as shown in Figure 13-37.

Figure 13-37 New Scan - Save
Putting it all together
Table 13-2 summarizes the report views for filesystems and directories that will be available depending on the settings of the Profiles and the Scan jobs. We assume the Profiles have been defined with the Summarize space by Filesystem/Directory option. Note that in order to get reports by filesystem or directory, you need to select either or both in the Scan Profile.

Table 13-2 Profiles/Scans versus Reports
Scan Jobs settings: Filesystem/Computer, Directory, Filesystem profile, Directory profile
Available reports: What is scanned, By Filesystem Reports, By Directory Reports
x - - -  FS  - -
x x - -  FS - -  Dir if in specified FS
x x x -  FS x -  Dir if in specified FS
x x x x  FS x x  Dir if in specified FS
x x x  FS x  Dir scanned if in specified FS
x - x x  FS x -
x - - x  FS - -

13.4 OS Alerts
TotalStorage Productivity Center for Data enables you to define Alerts on computers, filesystems, and directories. Once the Alerts are defined, it monitors the results of the Probe and Scan jobs, and triggers an Alert when the threshold or condition is met. TotalStorage Productivity Center for Data provides a number of Alert mechanisms from which you can choose, depending on the severity you assign to the Alert. Depending on the severity of the triggered event or the functions available in your environment, you may want to be alerted with:
- An SNMP trap to an event manager. Figure 13-38 shows a Filesystem space low Alert as displayed in our SNMP application, IBM Tivoli NetView. Defining the event manager is explained in 8.5, “Alert Disposition” on page 316.

Figure 13-38 Alert - SNMP trap sample

- A TEC (Tivoli Enterprise Console) event.
- An entry in the Alert Log (see Figure 13-39). You can configure Data Manager so that the Alert Log is automatically displayed when you log on to the GUI, by using Preferences → Edit General (see Figure 13-40).

Figure 13-39 Alert - Logged alerts sample
Figure 13-40 Alert - Preferences

- An entry in the Windows Event log, as shown in Figure 13-41. This is useful for lower severity Alerts, or when you are monitoring your Windows event logs with an automated tool such as IBM Tivoli Distributed Monitoring.

Figure 13-41 Alerts - Windows Event Viewer sample

- Running a specified script. The script runs on the specified computer with the authority of the Agent (root or Administrator). See 13.5.5, “Scheduled Actions” on page 582 for special considerations on script execution.
- An e-mail. TotalStorage Productivity Center for Data must be configured with a valid SMTP server and port, as explained in 8.5, “Alert Disposition” on page 316.
13.4.1 Alerting navigation tree
Figure 13-42 shows the complete navigation tree for OS Alerting, which includes Computer Alerts, Filesystem Alerts, Directory Alerts, and the Alert Log.

Figure 13-42 OS Alerting tree
Except for the Alert Log, you can create multiple definitions for each of these Alert features of TotalStorage Productivity Center for Data. To create a new definition, right-click the feature and select Create <feature>. Figure 13-43 shows how to create a new Filesystem Alert.

Figure 13-43 Filesystem alert creation
13.4.2 Computer Alerts
Computer Alerts act on the output of Probe jobs (see 13.3.5, “Probes” on page 545) and generate an Alert for each computer that meets the triggering condition. Figure 13-44 shows the configuration screen for a Computer Alert.

Figure 13-44 Computer alerts - Alerts

Alerts tab
The Alerts tab contains two parts:
- Triggering Condition, to specify the computer component you want to monitor. You can monitor a computer for:
  – RAM increased
  – RAM decreased
  – Virtual Memory increased
  – Virtual Memory decreased
  – New disk detected
  – Disk not found
  – New disk defect found
  – Total disk defects exceed a threshold that you specify
  – Disk failure predicted
  – New filesystem detected
  Information about disk failures is gathered through commands against the disks, with the following exceptions:
  – IDE disks support only Disk failure predicted queries
  – AIX SCSI disks do not support failure and predicted failure queries
- Triggered Action, where you specify the action that must be executed. If you choose to run a script, it receives several positional parameters that depend on the triggering condition. The parameters are displayed on the Specify Script panel, which is accessed by checking Run Script and clicking the Define button.
Figure 13-45 shows the parameters passed to the script for a RAM decreased condition.

Figure 13-45 Computer alerts - RAM decreased script parameters

Figure 13-46 shows the parameters passed to the script for a Disk not found condition.

Figure 13-46 Computer alerts - Disk not found script parameters

Computers tab
This tab limits the Alert process to specific computers or computer Groups (Figure 13-47).

Figure 13-47 Computer alerts - Computers tab
13.4.3 Filesystem Alerts
Filesystem Alerts act on the output of Probe and Scan jobs and generate an Alert for each filesystem that meets the specified threshold. Figure 13-48 shows the configuration screen for a Filesystem Alert.

Figure 13-48 Filesystem Alerts - Alert

Alerts tab
As for Computer Alerts, the Alerts tab contains two parts. In the Triggering Condition section, you can specify to be alerted if a:
- Filesystem is not found, which means the filesystem was not mounted during the most recent Probe or Scan
- Filesystem is reconfigured
- Filesystem's free space is less than a threshold specified in percent, KB, MB, or GB
- Free UNIX filesystem inode count is less than a threshold (either a percentage or an inode count)
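The free-space trigger can be stated in either relative or absolute terms, which behave differently on large filesystems. The following is an illustrative sketch of that evaluation, not the product's internal logic; the function and field names are assumptions.

```python
# Illustrative sketch (not product code): evaluating a
# "Filesystem free space less than" trigger, where the threshold
# may be given in percent, KB, MB, or GB.
UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}

def freespace_alert(capacity_bytes, free_bytes, threshold, unit):
    """Return True when the alert should fire."""
    if unit == "percent":
        return 100.0 * free_bytes / capacity_bytes < threshold
    return free_bytes < threshold * UNITS[unit]

# A 10 GB filesystem with 500 MB free fires a "less than 10%" alert
# (4.9% free), but not a "less than 100 MB" alert.
print(freespace_alert(10 * 1024**3, 500 * 1024**2, 10, "percent"))  # True
print(freespace_alert(10 * 1024**3, 500 * 1024**2, 100, "MB"))      # False
```

A percentage threshold scales with filesystem size, while an absolute threshold is the same on every filesystem; choose whichever matches how your applications fail when space runs low.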
You can choose to run a script (click the Define button next to Run Script), or you can change the content of the default generated mail by clicking Edit Email. You will see a popup with the default mail skeleton, which is editable. Figure 13-49 shows the default e-mail message.

Figure 13-49 Filesystem alert - Freespace default mail

13.4.4 Directory Alerts
Directory Alerts act on the output of Scan jobs.

Alerts tab
Directory Alerts configuration is similar to Filesystem Alerts. The supported triggers are:
- Directory not found
- Directory consumes more than the specified threshold, set in percent, KB, MB, or GB

Directories tab
Probe jobs do not report on directories, and Scan jobs report on directories only if a directory Profile has been assigned (see “Putting it all together” on page 555). Therefore, you can only choose to be alerted for directories that have already been included in a Scan and actually scanned.
13.4.5 Alert logs
The Data Manager → Alerting → Alert Log menu (Figure 13-50) lists all Alerts that have been generated.

Figure 13-50 Alerts log

There are nine different views. Each of them shows only the Alerts related to the selected view, except:
- All view - Shows all Alerts
- Alerts Directed to <logged user> - Shows all Alerts where the currently logged-on user has been specified in the Login notification field

When you click the icon on the left of a listed Alert, you see detailed information on the selected Alert, as shown in Figure 13-51.
Figure 13-51 Detailed Alert information

13.5 Policy management
The Policy Management functions of Data Manager enable you to:
- Define space limits (Quotas) on storage resources used by user IDs and user groups. These limits can be set at the network (whole environment), computer, and filesystem levels.
- Define space limits (Quotas) on NAS resources used by user IDs and user groups.
- Perform checks (Constraints) on specific files owned by users, and perform actions on those files.
- Define a filesystem extension policy to automatically increase filesystem capacity for managed hosts when utilization reaches a specified level. The LUN provisioning option can be enabled to extend filesystems within an ESS.
- Schedule scripts against your storage resources.

13.5.1 Quotas
Quotas can be set at either a user or an OS User Group level. For the OS User Group level, this can be either an OS User Group Group (see “OS User Group Groups” on page 540) or a standard OS group (such as system on UNIX, or Administrators on Windows). User Quotas trigger an action when one of the monitored users has reached the limit, while OS User Group Quotas trigger the action when the sum of the space used by all users of the monitored groups has reached the limit. The Quota definition mechanism is the same for both, except for the following differences:
- The menu tree to use:
  – Data Manager → Policy Management → Quotas → User
  – Data Manager → Policy Management → Quotas → OS User Group
- The monitored elements you can specify:
  – Users and User Groups for User Quotas
  – OS User Groups and OS User Group Groups for OS User Group Quotas

We will show how to configure User Quotas. User Group Quotas are configured similarly. Note that the Quota enforcement is soft; that is, users are not automatically prevented from exceeding their defined Quota, but the defined actions will trigger if that happens. There are three sub-entries for Quotas: Network Quotas, Computer Quotas, and Filesystem Quotas.

Network Quotas
A Network Quota defines the maximum cumulated space a user can occupy on all the scanned servers. An Alert is triggered for each user that exceeds the limit specified in the Quota definition. Use Data Manager → Policy Management → Quotas → User → Network, right-click, and select Create Quota to create a new Quota. The right pane displays the Quota configuration screen with four tabs.

Users tab
Figure 13-52 shows the Users tab for Network Quotas.

Figure 13-52 User Network Quotas - Users tab

From the Available column, select any user ID or User Group you want to monitor for space usage. The Profile pull-down menu is used to specify the file types that will be subject to the Quota. The list displays all Profiles that create summaries by user (by file owner). Select the Profile you want to use from the pull-down. The default Profile Summary By Owner collects information about all files and summarizes them at the user level. The ALLGIFFILES Profile collects information about GIF files and creates a summary at the user level, as displayed in Figure 13-53. This (non-default) Profile was created using the process shown in 13.3.6, “Profiles” on page 547.
Figure 13-53 Profile with user summary

Using this Profile option, we can define general Quotas for all files and more restrictive Quotas for some multimedia files such as GIF and MP3.

Filesystem tab
On the Filesystem tab, shown in Figure 13-54, select the filesystems or computers you want to include in the space usage for Quota management.

Figure 13-54 User Network Quotas - Filesystem tab

In this configuration, each user's cumulated space usage on all servers is calculated and checked against the Quota limit.
When to Check tab
Quota management is based on the output of the Scan jobs. Therefore, each Quota definition must be scheduled to run after the Scan jobs that collect the required information. The When to CHECK tab is standard, and allows you to define a one-off or a recurring job.

Alert tab
On the Alert tab, specify the Quota limit (in KB, MB, or GB) and the action to run when the Quota is exceeded.

Figure 13-55 User Network Quotas - Alert tab

You can choose from the standard Alert types available with TotalStorage Productivity Center for Data. Each Alert fires once for each user exceeding the Quota. We selected to run a script that we wrote, QUOTAUSERNET.BAT, listed in Example 13-2.

Example 13-2 QUOTAUSERNET.BAT script
echo NETWORK quota exceeded - %1 %2 uses %3 - Limit set to %4 >>quotausernet.txt

Example 13-3 shows the output file created by QUOTAUSERNET.BAT.

Example 13-3 Content of quotausernet.txt
NETWORK quota exceeded - user root uses 11.16GB - Limit set to 5.0GB
NETWORK quota exceeded - user Administrators@BUILTIN uses 11.97GB - Limit set to 5GB

The Alert has fired for the users root and Administrators. This clearly shows that administrative users such as root and Administrators should not normally be included in standard Quota monitoring.
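The Network Quota check described above can be modeled as summing each user's space across all scanned servers and firing one alert per user over the limit. This sketch is illustrative only; the function name, data layout, and usage figures are assumptions, not the product's implementation.

```python
# Illustrative sketch (not product code): a Network Quota check sums each
# user's space across all scanned servers and compares it to one limit,
# producing one violation per user over quota.
def check_network_quota(usage_by_server, limit_bytes):
    """usage_by_server: {user: {server: bytes_used}}.
    Returns {user: total_bytes} for every user over the limit."""
    violations = {}
    for user, per_server in usage_by_server.items():
        total = sum(per_server.values())   # cumulated space on all servers
        if total > limit_bytes:
            violations[user] = total
    return violations

GB = 1024**3
usage = {"root":  {"colorado": 6 * GB, "gallium": 6 * GB},   # invented figures
         "alice": {"colorado": 2 * GB}}
print(sorted(check_network_quota(usage, 5 * GB)))  # ['root']
```

Note how root exceeds a 5 GB network quota even though no single server holds more than 5 GB of root's files; that is exactly the case where a Network Quota fires but a Computer Quota would not.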
Computer Quotas
Computer Quotas enable you to fire Alerts when a user exceeds their space Quota on a specific computer, as shown in Figure 13-56. Multiple Alerts are generated if a user violates the Quota on separate computers.

Figure 13-56 Computer Quota - Alerts log

Filesystem Quotas
A Filesystem Quota defines a space usage limit at the filesystem level. An Alert is fired for each filesystem where a user exceeds the limit specified in the Quota definition. Use Data Manager → Policy Management → Quotas → User → Filesystem, right-click, and select Create Quota to create a new Quota. After setting up and running a Quota for selected filesystems, we received the entries in the Alert History shown in Figure 13-57.

Figure 13-57 Filesystem Quota - Alerts log
13.5.2 Network Appliance Quotas
Using Data Manager → Policy Management → Network Appliance Quotas → Schedules, you can compare the space used by users against Quotas defined inside Network Appliance filers, using the appropriate software, and raise an Alert whenever a user is close to reaching the NetApp Quota. When you run a Network Appliance Quota job, the NetApp Quota definitions are imported into TotalStorage Productivity Center for Data for read-only purposes.

Note: Network Appliance Quota jobs must be scheduled after the Scan jobs, since they use the statistics gathered by the latest Scan to trigger any NetApp Quota violation.

With Data Manager → Policy Management → Network Appliance Quotas → Imported User Quotas and Imported OS User Group Quotas, you can view the definitions of the Quotas defined on your NetApp filers.

13.5.3 Constraints
The main features of Constraints are listed in Figure 13-58: Constraints report on, and trigger actions based on, specific files that use too much space on monitored servers. Files can be selected based on server and filesystem, name pattern (for example, *.mp3 or *.avi), owner, age, size, and attributes. Actions are triggered through the standard Alerting mechanism when the total space used by the selected files exceeds a threshold.

Figure 13-58 Constraints

Constraints are used to generate Alerts when files matching specified criteria are consuming too much space on the monitored servers.
Constraints provide a deeper level of data management. Quotas allow reporting on users who have exceeded their space limitations. With Constraints, we can get more detailed information by specifying limits on particular file types or other attributes, such as owner, age, and so on. The output of a Constraint, when applied to a Scan, is a list of the files that are consuming too much space.

Note: Unlike Quotas, Constraints are automatically checked during Scan jobs and do not need to be scheduled. Also, the Scan does not need to be associated with Profiles that cause data to be stored for reporting.

Filesystems tab
The Filesystems tab helps you select the computers and filesystems you want to check for the current Constraint. The selection method for computers and filesystems is the same as for Scan jobs (see 13.3.7, “Scans” on page 552).

File Types tab
On the File Types tab, you can explicitly allow or disallow certain file patterns (Figure 13-59).

Figure 13-59 Constraint - File Types

Use the buttons at the top of the screen to allow or forbid files depending on their name. The left column shows some default file patterns, or you can use the bottom field to create your own pattern. Click >> to add your pattern to the allowed/forbidden files.
Users tab

The Users tab (shown in Figure 13-60) is used to allow or restrict the selected users in the Constraint.

Figure 13-60 Constraint - Users

Important: The file condition is logically ORed with the user condition. A file will be selected for Constraint processing if it meets at least one of the conditions.

Options tab

The Options tab provides additional conditions for file selection, and limits the number of selected files to store in the central repository. Once again, the conditions added in this tab are logically ORed with those previously set in the File Types and Users tabs.
The bottom part of the tab, shown in Figure 13-61, contains the textual form of the condition, taking into account all the entries made in the Filesystems, File Types, Users, and Options tabs.

Figure 13-61 Constraints - Options

You can change this condition or add further conditions by using the Edit Filter button. It displays the file filter popup (Figure 13-62), where you can change, add, and remove conditions or condition groups as previously explained in 13.3.6, “Profiles” on page 547.

Figure 13-62 Constraints - File filter
We changed the file filter to a more appropriate one by changing the OR operator to AND (Figure 13-63).

Figure 13-63 Constraints - File filter changed

Alert tab

After selecting the files, you may want to generate an Alert only if the total space used by files meeting the Constraint conditions exceeds a predefined limit. Use the Alert tab to specify the triggering condition and action (Figure 13-64).

Figure 13-64 Constraints - Alert
In our Constraint definition, a script is triggered for each filesystem where the selected files exceed one gigabyte. We select the script by checking the Run Script option and clicking Change... as shown in Figure 13-65. The script is passed several parameters, including the path to a file that contains the list of files meeting the Constraint. You can use this list to execute any action, including delete or archive commands.

Figure 13-65 Constraints - Script parameters

Our example uses a sample script (tsm_arch_del.vbs) shipped with TotalStorage Productivity Center for Data, which archives all the files in the produced list to a Tivoli Storage Manager server and then deletes them from local storage. This script is installed with the TotalStorage Productivity Center for Data Server and stored in the scripts subdirectory of the server installation. It can be edited or customized if required - we recommend that you save the original files first. Versions for Windows (tsm_arch_del.vbs) and UNIX (tsm_arch_del) are provided. If you will run this Constraint on a UNIX Agent, Perl must be installed on the Agent. A Tivoli Storage Manager server must be available and configured for this script to work. For more information on the sample scripts, see Appendix A of the IBM Tivoli Storage Resource Manager User’s Guide, SC32-9069.
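To give a feel for what such a constraint-triggered script does, here is a minimal, hypothetical sketch in shell. It is not the shipped tsm_arch_del sample: we assume, for illustration only, that the path to the file list has already been extracted from the parameters the Agent passes, and the Tivoli Storage Manager archive command appears only as a comment.

```shell
#!/bin/sh
# Hypothetical constraint-action sketch (not the shipped tsm_arch_del).
# process_filelist expects the path to the file that lists the files
# meeting the Constraint, one file name per line.
process_filelist() {
    filelist=$1
    [ -r "$filelist" ] || {
        echo "file list not readable: $filelist" >&2
        return 1
    }
    while IFS= read -r f; do
        # A real script might archive to Tivoli Storage Manager and
        # then delete, for example:
        #   dsmc archive -deletefiles "$f"
        # This sketch only logs the action it would take.
        echo "would archive and delete: $f"
    done < "$filelist"
}

# The Agent would invoke this with the list it generated, for example:
# process_filelist /tmp/constraint_filelist.txt
```

The loop-per-file structure is the essential point: the Constraint hands the script a list, and the script decides what to do with each entry.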
13.5.4 Filesystem extension and LUN provisioning

The main functions of Filesystem Extension are shown in Figure 13-66:
- Automates filesystem extension
- Supported platforms: AIX using JFS, Sun using VxFS
- Support for automatic LUN provisioning with the IBM ESS storage subsystem
- Actions triggered through the standard Alerting mechanism when a filesystem extension is performed

Figure 13-66 Filesystem Extension
We use filesystem extension policy to automatically extend filesystems when utilization reaches a specified threshold. We can also enable LUN provisioning to extend filesystems within an ESS.

To set up a filesystem extension policy, select Data Manager → Policy Management → Filesystem Extension. Right-click Filesystem Extension and select Create Filesystem Extension Rules as seen in Figure 13-67.

Figure 13-67 Create Filesystem Extension Rules
In the Filesystems tab, select the filesystems which will use the filesystem extension policy by moving them to the Current Selections panel. Note the Enabled checkbox: the default is checked, meaning the rule will be active. If you uncheck the box, it toggles to Disabled; you can still save the rule, but the job will not run. To specify the extension parameters, select the Extension tab (Figure 13-68).

Figure 13-68 Filesystem Extension - Extension

This tab specifies how a filesystem will be extended. An explanation of the fields follows.

Amount to Extend

We have the following options:
- Add - the amount of space used for extension, in MB or GB, or as a percentage of filesystem capacity.
- Make Freespace - the amount of free space that will be maintained in the filesystems by this policy. If free space falls below the amount specified, the difference will be added. Free space can be specified in MB or GB increments, or as a percentage of filesystem capacity.
- Make Capacity - the total capacity that will be maintained in the selected filesystems. If the capacity falls below the amount specified, the difference will be added.
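The Make Freespace behavior described above is simple arithmetic: when free space drops below the target, the difference is added. A sketch with hypothetical numbers:

```shell
#!/bin/sh
# Sketch of the Make Freespace calculation. The values are examples;
# the product derives them from the policy and from Probe/Scan data.
target_free_mb=500   # free space the policy maintains
current_free_mb=180  # free space currently on the filesystem

if [ "$current_free_mb" -lt "$target_free_mb" ]; then
    extend_by_mb=$((target_free_mb - current_free_mb))
    echo "extend filesystem by ${extend_by_mb} MB"
else
    echo "no extension needed"
fi
```

With these numbers the policy would add 320 MB, bringing free space back up to the 500 MB target.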
Limit Maximum Filesystem Capacity?

When this option is enabled, the Filesystem Maximum Capacity is used in conjunction with the Add or Make Freespace setting under Amount to Extend. If you enter a maximum capacity in the Filesystem Maximum Capacity field and the filesystem reaches the specified size, the filesystem is removed from the policy and an Alert is triggered.

Condition for Filesystem Extension

The options are:
- Extend filesystems regardless of remaining freespace - the filesystem will be expanded regardless of the available free space.
- Extend filesystems when freespace is less than - defines the free-space threshold that triggers the filesystem expansion. If free space falls below this value, the policy will be executed. Free space can be specified in MB or GB increments, or as a percentage of filesystem capacity.

Note: If you select Make Capacity under Amount to Extend, the Extend filesystems when freespace is less than option is not available.

Use LOG ONLY Mode

Enable Do Not Extend Filesystems - Log Only when you want the policy only to log the filesystem extension. The extension actions that would have taken place are written to the log file, but no extension takes place.

In the Provisioning tab (Figure 13-69) we define the LUN provisioning parameters. Note that at the time of writing, LUN provisioning is available only for filesystems on an ESS.

Figure 13-69 Filesystem Extension - Provisioning
LUN provisioning is an optional feature for filesystem extension, and is enabled by selecting Enable Automatic LUN Provisioning. In the Create LUNs that are at least field, you can specify a minimum size for new LUNs. If you select this option, LUNs of at least the size specified will be created. If no size is specified, the Amount to Extend option specified for the filesystem (in “Amount to Extend” on page 578) is used. For more information on LUN provisioning, see the IBM Tivoli Storage Resource Manager 1.2 User’s Guide.

The Model for New LUNs feature means that new LUNs are created similar to existing LUNs in your setup. At least one ESS LUN must currently be assigned to the TotalStorage Productivity Center for Data Agent associated with the filesystem you want to extend. There are two options for LUN modeling:
- Model new LUNs on others in the volume group of the filesystem being extended - provisioned LUNs are modeled on existing LUNs in the extended filesystem’s volume group.
- Model new LUNs on others on the same host as the filesystem being extended - provisioned LUNs are first modeled on existing LUNs in the extended filesystem’s volume group; if that LUN model cannot satisfy the requirements, other LUNs on the same host are used as the model.

The LUN Source option defines the location of the new LUN in the ESS, and has two options:
- Same Storage Pool - provisioned LUNs will be created using space in an existing Storage Pool. In ESS terminology, this is the Logical Subsystem (LSS).
- Same Storage Subsystem - provisioned LUNs can be created in any Storage Pool (ESS LSS).

The When to Enforce Policy tab (Figure 13-70) specifies when to apply the filesystem extension policy to the selected filesystems.

Figure 13-70 When to Enforce Policy tab
The options are:
- Enforce Policy after every Probe or Scan - automatically enforces the policy after every Probe or Scan job. The policy stays in effect until you either change this setting or disable the policy.
- Enforce Policy Now - enforces the policy immediately, for a single instance.
- Enforce Policy Once at - enforces the policy once at the specified time, given as month, day, year, hour, minute, and AM/PM.

The Alert tab (Figure 13-71) can define an Alert that will be triggered by the filesystem extension job.

Figure 13-71 Alert tab
Currently the only available condition is A filesystem extension action started automatically. Refer to “Alert tab” on page 544 for an explanation of the definitions.

Important: After making configuration changes to any of the above filesystem extension options, you must save the policy, as shown in Figure 13-72. If you selected Enforce Policy Now, the policy is executed after saving.

Figure 13-72 Save filesystem changes

For more information on filesystem extension and LUN provisioning, see IBM Tivoli Storage Resource Manager: A Practical Introduction.

13.5.5 Scheduled Actions

TotalStorage Productivity Center for Data comes with an integrated tool to schedule script execution on any of the Agents. If a script fails due to an unreachable Agent, the standard Alert processes can be used.

To create a Scheduled Action, select Data Manager → Policy Management → Scheduled Actions → Scripts, right-click, and select Create Script.

Computers tab

On the Computers tab, select the computers or computer groups that will execute the script.

Script Options tab

From the pull-down field, select a script that exists on the server. You can also enter the name of a script that does not yet exist on the server, or that resides only on the Agents.
The Script Options tab is shown in Figure 13-73.

Figure 13-73 Scheduled action - Script options

The Script Name pull-down field lists all files (including non-script files) in the server’s scripts directory.

Attention: For Windows Agents, the script must have an extension that has an associated script engine on the computer running the script (for example, .BAT, .CMD, or .VBS). For UNIX Agents:
- The extension is removed from the specified script name.
- The path to the shell (for example, /bin/bsh or /bin/ksh) must be specified in the first line of the script.
- If the script is located in a Windows TotalStorage Productivity Center for Data Server scripts directory, the script must either have been created on a UNIX platform and then transferred in binary mode to the Server, or be converted with UNIX OS tools such as dos2unix. This ensures that the line endings are correct for execution under UNIX.

When to Run tab

As with other Data Manager jobs, you can choose to run a script once or repeatedly at a predefined interval.

Alerts tab

With the Alert tab you can choose to be notified when a script fails due to an unreachable Agent or a script-not-found condition. The standard Alert mechanism described in 13.4, “OS Alerts” on page 555 is used.

13.6 Database monitoring

The Monitoring functions of Data Manager are extended to databases when the license key is enabled (8.4, “Configuring Data Manager for Databases” on page 313). Currently, MS SQL-Server, Oracle, DB2, and Sybase are supported.
We will now review the Groups, Probes, Scans, and Profiles definitions for Data Manager for Databases, and show the main differences compared to the core Data Manager monitoring functions. Figure 13-74 shows the navigation tree for Data Manager for Databases.

Figure 13-74 Databases - Navigation Tree

13.6.1 Groups

To get targeted monitoring of your database assets, you can create Groups consisting of:
- Computers
- Databases-Tablespaces
- Tables
- Users

Computer Groups

All databases residing on the selected computers will be probed, scanned, and managed for Quotas. The groups you have created using TotalStorage Productivity Center for Data remain available for TotalStorage Productivity Center for Data for Databases. If you create a new Group, the computers you put in it are removed from the Group they currently belong to.

To create a Computer Group, use Data Manager - Databases → Monitoring → Groups → Computer, right-click, and select Create Computer Group. “Computer Groups” on page 536 gives more information on creating Computer Groups.

Databases-Tablespaces Groups

Creating Groups with specific databases and tablespaces may be useful for applying identical management rules to databases with the same functional role within your enterprise.
An example could be to create a group containing all the Oracle system databases, as you will probably apply the same space and alerting rules to those databases. This is shown in Figure 13-75.

Figure 13-75 Database group definition

Table Groups

You can use Table Groups to group the same set of tables across selected or all database instances. You can use two different views to create a Table Group:
- Tables by instance selects several tables for one instance.
- Instances by table selects several instances for one table.

You can combine both views, as each entry you add is added to the group.

User Groups

As in core TotalStorage Productivity Center for Data, you can put user IDs in groups. The user groups you create are available across the whole TotalStorage Productivity Center for Data product set.

Tip: The Oracle and MS SQL-Server user IDs (SYSTEM, sa, ...) are also included in the available users list after the first database Probe.

13.6.2 Probes

The Probe process is used to gather data about the files, instances, logs, and objects that make up monitored databases. The results of Probe jobs are stored in the repository and supply the data necessary for Asset Reporting.

Use Data Manager - Databases → Monitoring → Probe, right-click, and select Create Probe to define a new Probe job. In the Instance tab of the Probe configuration, you can select specific instances, computers, and computer groups (Figure 13-76).
Figure 13-76 Database Probe definition

The Computers list contains only computers that have been defined for Data Manager for Databases. The definition procedure is described in “Configuring Data Manager for Databases” on page 313.

13.6.3 Profiles

As in TotalStorage Productivity Center for Data, Profiles in Data Manager for Databases are used to determine the database attributes that are to be scanned. They also determine the summary level and the retention time for data kept in the repository.

Use Data Manager - Databases → Monitoring → Profiles, right-click, and select Create Profile to define a new Profile. Figure 13-77 shows the Profile definition screen.

Figure 13-77 Database profile definition
You can choose to gather data on table sizes, database extents, or database free space, and summarize the results at the database or user level.

13.6.4 Scans

Scan jobs in Data Manager for Databases collect statistics about the storage usage and trends within your databases. The gathered data is used as input to usage reporting and Quota analysis.

Defining a Scan job requires defining:
- The databases, computers, and instances to Scan
- The tables to monitor for detailed information such as size, used space, indexes, and row count
- The Profile that determines the data that is gathered and the report views that will be made available by the Scan
- The job scheduling frequency
- Oracle-only additional options to gather information about pages allocated to a segment that have enough free space for additional rows
- The alerting mechanism to use should the Scan fail

All this information is set through the Scan definition screen, which contains one tab for each item listed above. To define a new Scan, select Data Manager - Databases → Monitoring → Scans, right-click, and select Create Scan as in Figure 13-78.

Figure 13-78 Database Scan definition

Note: If you request detailed scanning of tables, the tables will only be scanned if their respective databases have also been selected for scanning.
13.7 Database Alerts

TotalStorage Productivity Center for Data for Databases enables you to define Alerts on instances, databases, and tables. The Probe and Scan job outputs are processed and compared to the defined Alerts. If a threshold is reached, an Alert is triggered. Tivoli Storage Resource Manager for Databases uses the standard Alert mechanisms described in 13.4, “OS Alerts” on page 555.

13.7.1 Instance Alerts

Data Manager - Databases → Alerting → Instance Alerts, right-click, and select Create Alert lets you define the Alerts shown in Table 13-3. These Alerts are triggered during the Probe process.

Table 13-3 Instance Alerts

  Alert type                                Oracle   Sybase   MSSQL
  New database discovered                            x        x
  New tablespace discovered                 x
  Archive log contains more than X units    x
  New device discovered                              x
  Device dropped                                     x
  Device free space greater than X units             x
  Device free space less than X units                x

An interesting Alert is Archive Log Directory Contains More Than for Oracle, since the Oracle application can hang if there is no more space available for its archive log. This Alert can be used to monitor the space used in this specific directory and trigger a script that archives the files to an external manager such as Tivoli Storage Manager once the predefined threshold is reached. For a detailed example, refer to IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

13.7.2 Database-Tablespace Alerts

To define a Database-Tablespace Alert, select Data Manager - Databases → Alerting → Database-Tablespace Alerts, right-click, and select Create Alert. You can define various monitoring options on your databases, as shown in Table 13-4. These Alerts are triggered during the Probe process.
Table 13-4 Database-Tablespace Alerts

  Alert type                                    Oracle   Sybase   MSSQL
  Database/Tablespace freespace lower than      x        x        x
  Database/Tablespace offline                   x        x        x
  Database/Tablespace dropped                   x        x        x
  Freespace fragmented in more than n extents   x
  Largest free extent lower than                x
Table 13-4 (continued)

  Alert type                                    Oracle   Sybase   MSSQL
  Database Log freespace lower than                      x        x
  Last dump time previous to n days                      x

13.7.3 Table Alerts

To define a new Table Alert, use Data Manager - Databases → Alerting → Table Alerts, right-click, and select Create Alert. With this option you can set up monitoring on database tables. The Alerts that can be triggered for a table are shown in Table 13-5. These Alerts are triggered during the Scan process, and only if the Scan includes a Table Group.

Table 13-5 Table Alerts

  Alert type                                    Oracle   Sybase   MSSQL
  Total Table Size Greater Than                 x        x        x
  Table Dropped                                 x        x        x
  (Max Extents - Allocated) <                   x
  Segment Has More Than                         x
  Chained Row Count Greater Than                x
  Empty Used Segment Space Exceeds              x
  Forwarded Row Count Greater Than                       x

13.7.4 Alert log

The Data Manager - Databases → Alerting → Alert Log menu lists all Alerts that have been fired by the Probe jobs, the Scan jobs, the defined Alerts, and the violated Quotas.

Tip: Refer to 13.4.5, “Alert logs” on page 564 for more information about using the Alert log tree.

13.8 Databases policy management

The Policy Management functions of Data Manager for Databases enable you to:
- Define space limits (Quotas) on the database space used by table owners. These limits can be set at the network (whole environment), instance, or database level.
- Schedule scripts against your database resources.
13.8.1 Network Quotas

A Network Quota defines the maximum cumulative space a user can occupy across all the scanned databases. An Alert is fired for each user who exceeds the limit specified in the Quota definition.

Use Data Manager - Databases → Policy Management → Quotas → Network, right-click, and select Create Quota to create a new Quota. The right pane switches to a Quota configuration screen with four tabs.

Users tab

On the Users tab, specify the database users you want to be monitored for Quotas. You can also select a Profile in the Profile pull-down field at the top right of the tab; any Profile that stores summary data at the user level can be selected. The Quota will only be checked for databases that have been scanned using this Profile (Figure 13-79).

Figure 13-79 Database Quota - Users tab

Database-Tablespace tab

Use this tab to restrict Quota checking to certain databases. You can choose several databases or computers. If you choose a computer, all the databases running on it are included for Quota management.

When to Run tab

As with Data Manager, you can select the time to run from:
- Immediate
- Once, at a scheduled date and time
- Repetitive, at predefined intervals

Alert tab

On the Alert tab you can specify the space limit allowed for each user and the action to run. If no action is selected, the Quota violation will only be logged in the Alert log.
13.8.2 Instance Quota

The Instance Quota mechanism is similar to the Network Quota, except that it is set at the instance level. Whenever a user reaches the Quota on one instance, an Alert is fired.

13.8.3 Database Quota

With a Database Quota, the Quota is set at the database level. Each monitored user is reported as soon as they reach the limit on at least one of the monitored databases.

13.9 Database administration samples

We now list some typical checks done regularly by Oracle database administrators and show how they can be automated using Data Manager for Databases.

13.9.1 Database up

Data Manager for Databases can be used to test for database availability using Probe and Scan jobs, since they will fail and trigger an Alert if either the database or the listener is not available. Since those jobs use system resources to execute, you may instead choose scheduled scripts to test for database availability. Due to the limited scheduling options and the need for user-written scripts, we recommend using dedicated monitoring products such as Tivoli Monitoring for Databases.

13.9.2 Database utilization

There are a number of different levels at which system utilization can be monitored and checked in a database environment.

Tablespace space usage

This is a standard Alert provided by Data Manager for Databases. This Alert is triggered by the Probe jobs.

Archive log directory space usage

This is a standard Alert provided by Data Manager for Databases. This Alert is triggered by the Probe jobs, as shown in 13.7.1, “Instance Alerts” on page 588.

Maximum extents used

Your application may become unavailable if a table reaches its maximum allowed number of extents. This indicator can be monitored using the (Max Extents - Allocated) < Table Alert.

13.9.3 Need for reorganization

To ensure good application performance, it is important to be notified promptly when a database reorganization is required.
Count of used table extents

You can monitor the need for table reorganization using the table Alert trigger Segment has more than n extents.
Count of chained rows

Chained rows can have an impact on database access performance. This issue can be monitored using the Chained Row Count Greater Than table Alert trigger.

Freelist count

You cannot monitor the count of freelists in an Oracle table using Data Manager for Databases.

13.10 Data Manager reporting capabilities

The reporting capabilities of Data Manager are very rich, with over 300 predefined views. You can see the data from a very high level (for example, the total amount of free space available across the enterprise) or from a low level (for example, the amount of free space available on a particular volume or in a database table). The data can be displayed in tabular or graphical format, or can be exported as HTML, Comma-Separated Values (CSV), or formatted report files.

The reporting function uses the data stored in the Data Manager repository. Therefore, for reporting to be accurate in terms of using current data, regular discovery, Ping, Probe, and Scan jobs must be scheduled. These jobs are discussed in 13.3, “OS Monitoring” on page 533.

Figure 13-80 shows the Data Manager main screen with the reporting options highlighted. The Reporting sections are used for interactive reporting. They can be used to answer ad hoc questions such as, “How much free space is available on my UNIX systems?” Typically, you will start looking at data at a high level and drill down to find specific detail. Much of the information can be displayed in graphical form as well as in the default table form.

The My Reports sections give you access to predefined reports. Some of these reports are predefined by Data Manager; others can be created by individual users saving reporting criteria in the Reporting options. You can also set up Batch Reports to create reports automatically on a schedule.
My Reports will be covered in more detail in 13.14, “Creating customized reports” on page 683, and 13.15, “Setting up a schedule for daily reports” on page 697. The additional feature, TotalStorage Productivity Center for Data for Chargeback, produces storage usage chargeback data, as described in 13.17, “Charging for storage usage” on page 700.
Figure 13-80 highlights the predefined reports provided by TotalStorage Productivity Center for Data, the reports customized and saved by the user tpcadmin, the scheduling of reports to run in batch mode, the interactive reporting options, and the database reporting options.

Figure 13-80 TotalStorage Productivity Center for Data main screen showing reporting options

13.10.1 Major reporting categories

Data Manager collects data for reporting purposes in seven major categories, covered in the following sections. Within each major category there are a number of subcategories. Most categories are available for both operating system level reporting and database reporting; a few are for operating system reporting only. The description of each category specifies which applies, and in the more detailed sections that follow, we present the capabilities separately for Data Manager and Data Manager for Databases as appropriate.

Asset Reporting

Asset data is collected by Probe processes and reports on physical components such as systems, disk drives, and controllers. Currently, Asset Reporting down to the disk level is only available for locally attached devices. Asset Reporting is available for both operating system and database reporting.
Storage Subsystems Reporting

Storage Subsystem data is collected by Probe processes. It provides a mechanism for viewing storage capacity at the computer, filesystem, storage subsystem, LUN, and disk level. These reports also enable you to view the relationships among the components of a storage subsystem. Storage Subsystem reporting is currently only available for the IBM TotalStorage Enterprise Storage Server (ESS). Storage Subsystems Reporting is available for operating system reporting only.

Availability Reporting

Availability data is collected by Ping processes and allows you to report on the availability of your storage resources and computer systems. Availability Reporting is provided for operating system reporting only.

Capacity Reporting

Capacity Reporting shows how much storage you have and how much of it is being used. You can report at any level, from an entire network down to an individual filesystem. Capacity Reporting is provided for both operating system and database reporting.

Usage Reporting

Usage Reporting goes a level deeper than Capacity Reporting. It is concerned not so much with how much space is in use, but rather with what the space is actually being used for. For example, you can create a report that shows usage by user, or a wasted-space report. You define what wasted space means, but it could be, for example, files of a particular type, or files within a certain directory that are more than 30 days old. Usage Reporting is provided for both operating system and database reporting.

Usage Violation Reporting

Usage Violation Reporting allows you to set up rules for the type and/or amount of data that can be stored, and then report on exceptions to those rules. For example, you could have a rule that says MP3 and AVI files are not allowed to be stored on file servers. You can also set Quotas for how much space an individual user can consume.
Note that usage violations are only softly enforced: Data Manager will not enforce the rules in real time, but will generate an exception report after the fact. Usage Violation Reporting is provided for both operating system and database reporting.

Backup Reporting

Backup Reporting identifies files that have not been backed up. Backup Reporting is provided for operating system reporting only.

13.11 Using the standard reporting functions

This section discusses Data Manager’s standard reporting capabilities. Customized reporting is covered in 13.14, “Creating customized reports” on page 683. This section is not intended to cover exhaustively all of the reporting options available, as these are very numerous and are covered in detail in the Reporting section of the manual IBM Tivoli Storage Resource Manager V1.1 Reference Guide, SC32-9069. Instead, this section provides a basic overview of Data Manager reporting, with some examples of the types of reports that can be produced, and additional information on some of the less straightforward reporting options.
To demonstrate the reporting capabilities of TotalStorage Productivity Center for Data, we installed the Server code on a Windows 2000 system called COLORADO and deployed these Windows Agents:
- GALLIUM
- WISLA
- LOCHNESS

COLORADO is also an Agent as well as being the Server. The host GALLIUM has both Microsoft SQL-Server and Oracle databases installed to demonstrate database reporting. The Agent on LOCHNESS also provides data for a NAS device called NAS200. The Agent on VMWAREW2KSRV1 also provides data for a NetWare server called ITSOSJNW6. The lab setup is shown in Figure 13-81, which depicts the Server with its repository database and Agents on Windows NT, Windows 2000 (including VMware guests), Solaris, AIX, HP-UX, NetWare, and the NAS200 device.

Figure 13-81 TotalStorage Productivity Center for Data Lab Environment

13.11.1 Asset Reporting

Asset Reporting provides configuration information for the TotalStorage Productivity Center for Data Agents. The information available includes typical asset details, such as disk system name and disk capacities, plus a large amount of additional detail.

IBM TotalStorage Productivity Center for Data

Figure 13-82 shows the major subtypes within Asset Reporting. Note that unlike the other reporting categories, where most of the drill-down functions are chosen from the right-hand panel, in Asset Reporting the drill-down functions are mostly available in the left-hand pane.
Figure 13-82 Reporting - Asset

By Cluster view
Click By Cluster to drill down into a virtual server or cluster node. You can drill down further to a specific controller to see the disks under it, or drill down on a disk to see the file systems under it.

By Computer view
Click By Computer to see a list of all of the monitored systems (Figure 13-83).

Figure 13-83 Reporting - Asset - By Computer
From there we can drill down on the assets associated with each system. We will take a look at node GALLIUM. In Figure 13-84 we have shown most of the items for GALLIUM expanded, with the details for Disk 2 displayed in the bottom right-hand pane. You will see a detailed level of information, both in terms of the types of objects for which data is collected (for example, Exports or Shares) and the specific detail for a given device.

Figure 13-84 Report - GALLIUM assets

By OS Type view
This view of the Asset data provides the same information as the By Computer view, with the difference that the Agent systems are displayed sorted by operating system platform.

By Storage Subsystem view
Data Manager provides reporting for storage subsystems: any disk array subsystem whose SMI-S Provider is CTP certified by SNIA for SMI-S 1.0.2, and IBM SAN Volume Controller clusters. For disk array subsystems, you can view information about:
– Disk groups (for IBM TotalStorage ESS subsystems)
– Array sites (for IBM TotalStorage DS6000/8000 only)
– Ranks (for IBM TotalStorage DS6000/8000 only)
– Storage pools (for disk array subsystems)
– Disks (for disk array subsystems)
– LUNs (for disk array subsystems)
For IBM SAN Volume Controllers, you can view information about:
– Managed disk groups
– Managed disks
– Virtual disks
System-wide view
The System-wide view provides additional capability, in that it gives a system-wide rather than a node-by-node view of some of the data. A graphical view of some of the data is also available. Figure 13-85 shows most of the options available from the System-wide view and, in the main panel, the report of all exports or shares available.

Figure 13-85 Reporting - Assets - System-wide view

Each of the options available under the System-wide view is self-explanatory, with the possible exception of Monitored Directories. Data Manager can monitor utilization at a directory level as well as at a device or filesystem level. However, by default, directory-level monitoring is disabled. To enable directory monitoring, define a Directory Group by selecting Data Manager → Monitoring → Groups → Directory, then right-click Directory and choose Create Directory Group. The process of setting up Directory Groups is discussed in more detail in 13.3.2, “Groups” on page 535. Once the Directory Group is created, it must be assigned to a Scan job, and that job must be run on the systems where the directories to be monitored exist. By setting up a monitored directory you will get additional information for that directory; the information collected includes any subdirectories. Information collected about the directory tree includes the number of files, number of subdirectories, total space used, and average file size. This can be graphed over time to determine space usage patterns.

IBM TotalStorage Productivity Center for Data for Databases
Asset Reporting for databases is similar to that for filesystems; however, filesystem entities like controllers, disks, filesystems, and shares are replaced with database instances, databases, tables, and data files.
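As an aside, the directory-tree statistics that a Scan gathers for a monitored directory, as described above (number of files, number of subdirectories, total space used, and average file size), can be sketched as follows. This is an illustrative reimplementation only, not the product's code:

```python
import os

def directory_stats(root):
    """Walk a directory tree and gather the kind of statistics Data Manager
    reports for a monitored directory: file count, subdirectory count,
    total space used, and average file size (subdirectories included)."""
    file_count = 0
    dir_count = 0
    total_bytes = 0
    for dirpath, dirnames, filenames in os.walk(root):
        dir_count += len(dirnames)
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total_bytes += os.path.getsize(path)
                file_count += 1
            except OSError:
                continue  # file removed mid-scan; skip it
    avg = total_bytes / file_count if file_count else 0
    return {"files": file_count, "subdirectories": dir_count,
            "total_bytes": total_bytes, "average_file_size": avg}
```

Sampled by successive Scan runs, figures like these give the space-usage trend that the GUI can graph over time.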
Very specific information regarding an individual database is available, as shown in Figure 13-86 for the database DMCOSERV on node COLORADO.

Figure 13-86 DMCOSERV database asset details

Or you can see rollup information for all databases on a given system (using the System-wide view), as shown in Figure 13-87.

Figure 13-87 System-wide view of database assets
All of the database Asset Reporting options are quite straightforward with one exception: in order to receive table-level asset information, one or more Table Groups needs to be defined. This is a similar process to that for Directory Groups, as described in “System-wide view” on page 598. You would not typically include all database tables within Table Groups, but perhaps only critical or rapidly growing tables. We will set up a group for UDB.

To set up a Table Group, navigate to Data Manager - Databases → Monitoring → Groups → Table, right-click Table, and choose Create Table Group (Figure 13-88).

Figure 13-88 Create a new database table group

We have entered a description of Colorado Table Group. Now we click New Instance to enter the details of the database and tables that we want to monitor. From the drop-down box, we select the database instance, in this case the UDB instance on Colorado. We then enter three tables in turn. For each table, we entered the database name (DMCOSERV), the creator name (db2admin), and a table name. After entering the values, click Add to enter more tables, or finish. We entered the table names of BASEENTITY, DMSTORAGEPOOL, and DMVOLUME, as shown in Figure 13-89. Once all of the tables have been entered, click OK.
Figure 13-89 Add UDB tables to table group

Now we return to the Create Table Group panel, and we see in Figure 13-90 the information about the newly entered tables.

Figure 13-90 Tables added to table group

Now we save by clicking the floppy disk icon and, when prompted, enter the Table Group name of ColoradoTableGroup. In order for the information for our tables to be collected, the Table Group needs to be assigned to a Scan job. We will assign it to the default database scan job, TPCUser.Default Db Scan, by choosing Data Manager - Databases → Monitoring → Scans → TPCUser.Default Db Scan.
The definition for this scan job is shown in Figure 13-91; in particular, note the Table Groups tab. Our new Table Group is initially shown in the left-hand pane. We moved it to the right-hand pane by selecting it and clicking >>. We then save the updates to the Scan job by choosing File → Save (or with the floppy disk icon on the tool bar). Finally, we can execute the Scan job by right-clicking it and choosing Run Now. Figure 13-91 shows the Scan job definition after the Table Group has been assigned to it.

Figure 13-91 Table group added to scan job

Example 13-4 is an extract from the Scan job log showing that the table information is now being collected. You can view the Scan job log through the TotalStorage Productivity Center for Data GUI by first expanding the particular Scan job definition. A list of Scan execution reports will be shown; select the one of interest. You may need to right-click the Scan job definition and choose Refresh Job List. The list of Scan executions for the TPCUser.Default Db Scan is shown in Figure 13-92.

Figure 13-92 Displaying Scan job list
Once you have chosen the actual job, you can click the detail icon for the system that you are interested in to display the job log. The actual file specification of the log file on the Agent system is displayed at the top of the output when viewed through the GUI. Example 13-4 shows the actual file output.

Example 13-4 Database scan job showing table monitoring
09-19 18:01:01 DBA0036I: The following databases-tablespaces will be scanned:
MS SQLServer gallium/gallium Databases: master model msdb Northwind pubs tempdb
Oracle itsrm Tablespaces: ITSRM.DRSYS ITSRM.INDX ITSRM.RBS ITSRM.SYSTEM ITSRM.TEMP ITSRM.TOOLS ITSRM.USERS
09-19 18:01:01 DBA0041I: Monitored Tables: .CTXSYS.DR$OBJECT Northwind.dbo.Employees Northwind.dbo.Customers Northwind.dbo.Suppliers

Finally, we can produce table-level asset reports by choosing, for example, Data Manager - Databases → Reporting → Asset → System-wide → All DBMSs → Tables → By Total Size. This is shown in Figure 13-93.

Figure 13-93 Tables by total size asset report
13.11.2 Storage Subsystems Reporting
Storage Subsystems Reporting is covered in detail in 13.12, “TotalStorage Productivity Center for Data ESS Reporting” on page 634.

13.11.3 Availability Reporting
Availability Reporting is quite simple. Two different sets of numbers are reported: Ping and Computer Uptime.

Ping is only concerned with whether or not the system is up and responding to ICMP requests; it does not care whether the Data Agent is running or not. Ping results are collected by a Ping job, so this must be scheduled to run on a regular basis. See 13.3.4, “Pings” on page 542.

Computer Uptime detects whether or not the Data Agent is running. Computer Uptime statistics are gathered by a Probe job, so this must also be scheduled to run on a regular basis. See 13.3.5, “Probes” on page 545.

Figure 13-94 shows the Ping report for our TotalStorage Productivity Center for Data environment, and Figure 13-95 shows the Computer Uptime report. To generate these reports, we had to select the computers of interest and select Generate Report.

Figure 13-94 Reports - Availability - Ping
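Both availability figures reduce to a success ratio over scheduled samples: the fraction of Ping attempts that got a response, or the fraction of Probe intervals during which the Agent was up. A minimal sketch of that calculation (the product's exact formula may differ):

```python
def availability_percent(samples):
    """Compute an availability percentage from a list of boolean samples
    (True = system responded to a ping / agent was found running).
    Illustrative of the figures in the Ping and Computer Uptime reports."""
    if not samples:
        return 0.0  # no data collected yet; nothing to report
    return 100.0 * sum(1 for s in samples if s) / len(samples)
```

This is also why both reports depend on regularly scheduled Ping and Probe jobs: with no samples, there is nothing to compute.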
Figure 13-95 Reports - Availability - Computer Uptime

13.11.4 Capacity Reporting
Capacity Reporting shows how much storage capacity is installed and, of that capacity, how much is being used and how much is available for future growth.

IBM TotalStorage Productivity Center for Data
There are four capacity report views within TotalStorage Productivity Center for Data:
Disk Capacity
Filesystem Capacity
Filesystem Used Space
Filesystem Free Space
In reality, however, there are only two distinct views. The Filesystem Capacity and Filesystem Used Space views are nearly identical; the only differences are the order of the columns and the row sort order. And there is relatively little difference between these two views and the Filesystem Free Space view: the Filesystem Capacity and Filesystem Used Space views report on used space, so they include columns like percent used space, whereas Filesystem Free Space includes columns like percent free space. All other data is identical. Therefore, there are really only two views: a Disk Capacity view and a Filesystem Capacity view.

The Disk Capacity view provides information about physical or logical disk devices and what proportion of them has been allocated. Figure 13-96 shows the Disk Capacity by Disk selection window.
Figure 13-96 Disk capacity report selection window

Often there is a one-to-one relationship between devices and filesystems, as seen in Figure 13-97, particularly on Windows systems. However, if a single physical disk has two partitions, the detailed description will show two partitions at the bottom of the right-hand pane.

Figure 13-97 Capacity report - Gallium Disk 0
IBM TotalStorage Productivity Center for Data for Databases
Capacity Reporting for databases is very straightforward. You can report on:
All databases of any type
All databases of a given type on a particular system or group of systems
A specific database
Figure 13-98 shows a Capacity Report by Computer Group. We actually have databases in just one Computer Group, WindowsDBServers. We then drilled down to see all systems within the WindowsDBServers group, then specifically to node GALLIUM, so that we could see all databases on GALLIUM.

Figure 13-98 Database Capacity report by Computer Group

13.11.5 Usage Reporting
The reporting categories covered so far have been mostly concerned with reporting at the system or device level. Usage Reporting goes one step further down, to report at a level lower than the filesystem. You can produce reports that answer questions such as:
How old is my data? When was it created, last accessed, or modified?
What are my largest files?
What are my largest directories?
Do I have any orphan files?

Data Manager
With Usage Reporting, you will be able to:
Identify orphan files and either update their ownership or delete them to free up space
Identify the largest files and determine whether they are needed or whether parts of the data could be archived
Identify obsolete files so that they can be either deleted or archived
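The "What are my largest files?" question above amounts to a scan sorted by size. A hedged sketch of that logic, purely for illustration and not the product's implementation:

```python
import os

def largest_files(root, top_n=20):
    """Return the top_n largest files under root as (size, path) pairs,
    largest first -- the same question Data Manager's largest-files
    usage reports answer from Scan data."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # unreadable or vanished file; skip
    sizes.sort(reverse=True)
    return sizes[:top_n]
```

In the product, of course, the sizes come from the repository populated by Scan jobs rather than from a live filesystem walk at report time.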
There are a few restrictions on Usage Reporting:
In order to report by directory or by Directory Group, you will need to set them up in Data Manager → Monitoring → Groups → Directory.
UNIX systems do not record file creation dates, so no reporting by creation time is available for these systems.

Data Manager for Databases
Like database Asset Reporting, all of the database Usage Reporting options are quite straightforward, with the exception of table-level reporting. From a usage perspective there are two types of table report available:
Largest tables
Monitored tables
We can report on the largest database tables by choosing, for example, Data Manager - Databases → Reporting → Usage → All DBMSs → Tables → Largest Tables → By RDBMS Type. This report is shown in Figure 13-99.

Figure 13-99 Largest tables by RDBMS type
A Monitored Tables by RDBMS Type report is shown in Figure 13-100. In this case, only tables that are part of a Table Group included in a Scan job are reported on.

Figure 13-100 Monitored tables by RDBMS type
13.11.6 Usage Violation Reporting
Usage Violation Reporting reports on violations of Data Manager Constraints and Quotas. A Constraint is a limit, by file name syntax, on the type of data that can be stored on a system. A Quota is a storage usage limit placed on a user or operating system User Group, and can be defined at the network, computer, or filesystem level. Constraints and Quotas were described in 13.5, “Policy management” on page 565. It is important to remember that Quotas and Constraints are not hard limits: users will not be stopped from working if a Quota or Constraint is violated, but the event will trigger an exception, which will be reported.

Data Manager Constraint Violation Reporting
There are a number of predefined Constraints in Data Manager. Before we produce a Constraint violation report, we need to set up a new Constraint called forbidden files. Setting up Constraints was described in 13.5.3, “Constraints” on page 570.

First, navigate to Data Manager → Policy Management → Constraints. Existing Constraints will be listed. Right-click Constraints and choose Create Constraint. On the Filesystems tab we entered a description of forbidden files, chose Computer Groups, then selected tpcadmin.Windows Systems and tpcadmin.Windows DB Systems and clicked >>. The completed Filesystems tab is shown in Figure 13-101.

Figure 13-101 Create a Constraint - Filesystems tab
We then need to specify, on the File Types tab, what a forbidden file is. You can define the criteria as either inclusive or exclusive; that is, you can specify just those file types that will violate the Constraint, or you can specify that all files will violate the Constraint except those specified. A number of predefined file types are included; you can also choose additional files by entering appropriate values in the “Or enter a pattern” field at the bottom of the form. We have chosen MP3 and AVI files. The completed File Types tab is shown in Figure 13-102.

Figure 13-102 Create a Constraint - File Types tab

The Users tab is very similar to the File Types tab: you can specify which users should be included in or excluded from the selection criteria. We have taken the default, which is to include all users.

On the Options tab, we nominate a maximum number of rows to be returned. We can also apply some more specific selection criteria here, such as only including files that are larger than a defined size. Note, however, that these criteria are ORed with the file type criteria. For example, if we specified here that we only wanted to include files greater than 1 MB, the search criteria would be changed to ((NAME matches any of ('*.AVI', '*.mp3') AND TYPE <> DIRECTORY) OR SIZE > 1 MB). So the returned list of files would be any file greater than 1 MB in size, plus any *.MP3 or *.AVI files.
If you wish to change the selection criteria so that you instead select only those *.MP3 or *.AVI files that are larger than 1 MB, enter 1 MB against the bigger than option, and then click the Edit Filter button shown in Figure 13-105. You will then see the file filter as shown in Figure 13-103. To AND the size criterion with the file type criteria, click the Size > 1MB entry and drag it up to the All of tag. The changed filter is shown in Figure 13-104. You can also see that the Boolean expression for the filter has changed to reflect this condition.

Figure 13-103 Edit a Constraint file filter - before change

Figure 13-104 Edit a Constraint file filter - after change
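The difference between the two filters is easiest to see in code. The sketch below models the name-pattern test with fnmatch and shows the OR form (the default) against the AND form (after dragging Size under "All of"); the directory test from the full expression is omitted for brevity, and this is illustrative only, not the product's evaluator:

```python
from fnmatch import fnmatch

PATTERNS = ("*.avi", "*.mp3")
ONE_MB = 1024 * 1024

def matches_name(name):
    """True if the file name matches any forbidden-file pattern."""
    return any(fnmatch(name.lower(), p) for p in PATTERNS)

def violates_or(name, size):
    """Default filter: size criterion is ORed with the patterns, so ANY
    file over 1 MB violates, plus every *.mp3/*.avi file of any size."""
    return matches_name(name) or size > ONE_MB

def violates_and(name, size):
    """Edited filter: size is under 'All of' (ANDed), so only
    *.mp3/*.avi files that are also over 1 MB violate."""
    return matches_name(name) and size > ONE_MB
```

For example, a 2 MB document violates the OR form but not the AND form, while a small MP3 violates only the OR form.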
In this case we did not want to apply a size criterion, so we left the Options tab entries at their defaults, as shown in Figure 13-105.

Figure 13-105 Create a Constraint - Options tab

Finally, we can specify that we want an Alert generated if a triggering condition is met. The only choice here is to specify a maximum amount of space consumed by the files that meet our selection criteria. We left all of the Alert tab options at their defaults, other than specifying an upper limit of 100 MB for files that have met our selection criteria.
The Alert tab is shown in Figure 13-106. Alerting is covered in more detail in 13.4, “OS Alerts” on page 555.

Figure 13-106 Create a Constraint - Alert tab

We then clicked the Save button and entered a name of Forbidden Files, as shown in Figure 13-107.

Figure 13-107 Create a Constraint - save

Before we can report against the Constraint, we need to ensure that a Scan job has been run to collect the appropriate information. Once the Scan has completed successfully, you can go ahead and produce Constraint Violation Reports. Note that you cannot produce a report of violations of one particular Constraint; the report will include entries for any Constraint violation. However, once the report is generated, you can drill down into specific Constraint violations. We produced the report by choosing Data Manager → Reporting → Usage Violations → Constraint Violation → By Computer. You will see a screen like Figure 13-108, where you can select a subset of the clients if appropriate. After selecting, click Generate Report.
Figure 13-108 Constraint violation report selection screen

You will then see a list of all instances of Constraint violations, as shown in Figure 13-109. The report shows multiple types of Constraints. Some of these Constraints were predefined (Orphaned File Constraint and Obsolete File Constraint) and others (ALLFILES and forbidden files) we defined ourselves. An orphaned file is any file that does not have an owner. This allows you to easily identify files that belonged to users who have left your organization, or that have had an incorrect ownership set.

Figure 13-109 Constraint violations by computer
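Conceptually, the Orphaned File Constraint's test is just a membership check of each file's owner against the known accounts (the passwd database on UNIX, SID lookup on Windows). A hedged sketch with an injected owner table, not the product's implementation:

```python
def find_orphans(files, known_owners):
    """Return the files whose owner is not a known user account.

    files        -- mapping of path -> owner id (uid or SID string)
    known_owners -- set of ids that still resolve to a real account
    """
    return [path for path, owner in files.items()
            if owner not in known_owners]
```

Files whose owner id no longer resolves, typically because the user's account was deleted, are exactly what the report surfaces for cleanup or re-ownership.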
From there you can drill down on a specific Constraint, then on filesystems within the Constraint, and finally to a list of files that violated the Constraint on that filesystem, by selecting the magnifying glass icon next to the entry of interest. Or, as shown in Figure 13-110, by clicking the pie chart icon next to the entry for forbidden files, you can produce a graph indicating what proportion of capacity is being used by files violating the Constraint. Position the cursor over any segment of the pie chart to show the percentage and number of bytes consumed by that segment.

Figure 13-110 Graph of capacity used by Constraint violating files

Constraint violations are also written to the Data Manager Alert Log. Figure 13-111 shows the same list of violations as you would see if you had produced a Constraint Violations by Computer report.

Figure 13-111 Alert log showing Constraint violations
Quota Violation Reporting
The process of producing a Quota violation report is very similar to producing a Constraint violation report, but with some key differences. One difference between Quotas and Constraints is the process of collecting data. For Constraints, the data is collected as part of a standard Scan job, in a similar way to adding an additional Profile to a Scan. Quota data collections are performed in a separately scheduled job, so when you set up a Quota you need to specify scheduling parameters.

We set up a Quota rule called Big Windows Users by choosing Data Manager → Policy Management → Quotas → User → Computer, right-clicking Computer, and selecting Create Quota. On the Users screen we entered a description of Big Windows Users and then selected User Groups and then TPCUser.Default User Group, as shown in Figure 13-112.

Figure 13-112 Create Quota - Users tab
On the Computers tab we chose our Windows group, tpcadmin.Windows Systems (Figure 13-113).

Figure 13-113 Create Quota - Computers tab

We then had to specify when and how often we wanted the Quota job to run. We chose to run the job weekly, under the When to CHECK tab, as shown in Figure 13-114.

Figure 13-114 Create Quota - When to Check
On the Alert tab, shown in Figure 13-115, we accepted all of the defaults other than specifying the limit under User Consumes More Than; in this case, 1 GB. No Alerts will be generated other than logging any exceptions in the Data Manager Alert Log.

Figure 13-115 Create Quota - Alert

Finally, we save the Quota definition, calling it Big Windows Users, as shown in Figure 13-116.

Figure 13-116 Create Quota - save
The new Quota now appears under Data Manager → Policy Management → Quotas → User → Computer as tpcadmin.Big Windows Users (where tpcadmin is our Data Manager username). We right-clicked the Quota and chose Run Now, as in Figure 13-117.

Figure 13-117 Run new Quota job

This job will collect data related to the Quota, and add any Quota violations to the Alert Log, as shown in Figure 13-118.

Figure 13-118 Alert Log - Quota violations
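The check the Quota job performs can be sketched as a comparison of per-user consumption against the limit set on the Alert tab. Remember that this is reporting, not enforcement: exceeding the limit logs an Alert, nothing more. An illustrative sketch only:

```python
ONE_GB = 1024 ** 3

def quota_violations(usage_by_user, limit=ONE_GB):
    """Given per-user space consumption in bytes, return the users who
    exceed the quota limit (default 1 GB, matching the 'User Consumes
    More Than' value used in this example). Violators are reported to
    the Alert Log; they are not blocked from writing more data."""
    return {user: used for user, used in usage_by_user.items()
            if used > limit}
```

Each entry in the returned mapping corresponds to one Alert Log row of the kind shown above.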
We then drilled down on one of the Alerts to see the details (Figure 13-119).

Figure 13-119 Alert Log - Quota violation detail

Finally, we can create a Quota violation report by choosing Data Manager → Reporting → Usage Violations → Quota Violations → Computer Quotas → By Computer. The high-level report is shown in Figure 13-120.

Figure 13-120 Quota violations by computer
We can then drill down further for additional detail, or produce a graphical representation of the data behind the violation. The graph in Figure 13-121 shows a breakdown of the user’s data by file size.

Figure 13-121 Quota violation graphical breakdown by file size

Data Manager for Databases
Filesystem Usage Violation Reporting includes both Quota and Constraint violations; for databases, however, only Quota violations are available. You can place a Quota on users, user groups, or all users, and you can limit the Quota by computer, computer group, database instance, database tablespace group, or tablespace.

We will set up an Instance Quota that limits any individual user to 100 MB of space per instance for any database on any server in the tpcadmin.WindowsDBServers computer group. To do this, navigate to Data Manager - Databases → Policy Management → Quotas → Instance. Right-click Instance and choose Create Quota. Figure 13-122 shows the Quota definition screen. We entered a description of Big DB Users and selected the TPCUser.Default User Group by expanding User Groups, clicking TPCUser.Default User Group, and then clicking >>.
Figure 13-122 Create database Quota - Users tab

On the Instances tab, expand Computer Groups, select tpcadmin.Windows DB Systems, and then click >> to add it to the Current Selections, as shown in Figure 13-123.

Figure 13-123 Create database Quota - Instances tab
On the When to Run tab, shown in Figure 13-124, we chose to run the Quota job weekly and chose a time of day for the job to run. Other values were left at the defaults.

Figure 13-124 Create a database Quota - When to Run tab

On the Alert tab (shown in Figure 13-125) we specified the actual Quota that we wanted enforced, which was a 100 MB per-user Quota. Other values were left at the defaults.

Figure 13-125 Create a database Quota - Alert tab
We saved the new Quota definition with a name of Big DB Users, as shown in Figure 13-126.

Figure 13-126 Create a database Quota - Save

We now run the Quota by right-clicking it and choosing Run Now, as seen in Figure 13-127.

Figure 13-127 Run the database Quota
To check whether any user has violated the Quota, navigate to Data Manager - Databases → Alerting → Alert Log → All DBMSs → All. We see one violation, as shown in Figure 13-128.

Figure 13-128 DB Quota violation

We can also now run a database Quota violation report by choosing Data Manager - Databases → Reporting → Usage Violations → Quota Violations → All Quotas → By User Quota. This report can be seen in Figure 13-129.

Figure 13-129 Database Quota violation report
13.11.7 Backup Reporting
Backup Reporting is designed to do two things: it can alert you to situations where files have been modified but not backed up, and it can provide data on the volume of data that will be backed up. Figure 13-130 shows the options that are available for Backup Reporting.

Figure 13-130 Backup Reporting options

Most at Risk Files
Data Manager defines most at risk files as those that are least-recently modified, but have not been backed up. There are some points worth noting about this report:
Since the report relies on the archive bit being set to determine whether the file has changed, this report will only work on Windows systems, as UNIX systems have no equivalent to the archive bit.
With most backup products, once a file has been backed up, the archive bit is cleared. Before Version 5.2, IBM Tivoli Storage Manager did not do this; therefore, if an earlier level of Tivoli Storage Manager is used, this report may list files that actually have been backed up. IBM Tivoli Storage Manager Version 5.2 has the ability to reset the Windows archive bit after a successful backup of a file.
By default, information on only 20 files will be returned. Figure 13-131 shows the selection screen for the report. You will notice that the report uses the Profile TPCUser.Most at Risk. It is in this Profile that the 20-file limit is set, although the value can be changed. You can override the value on the selection screen, but you can only reduce it there, not increase it. By updating the Profile you can also exclude files from the report. By default, any file in the WINNT\system* directory tree on any device is excluded. You can add entries to the exclusion list if appropriate. Ideally, the exclusion list should be the same as the one in your backup product.

Figure 13-131 Files most at risk report - selection

Modified Files Not Backed Up
This report provides an aging analysis of your data that has been modified but not backed up. It shows what proportion of the data has been modified within the past 24 hours, between one and seven days ago, between one week and one month ago, and so on. Figure 13-132 shows the selection made in our Windows environment. Like the Most at Risk Files report, this report also relies on the archive bit, so check whether your backup application uses this.
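The aging analysis described above is a bucketing of modified-but-not-backed-up files by age. The sketch below uses illustrative bucket boundaries (the product's exact ranges may differ) and counts files per bucket:

```python
# Age buckets in days; boundaries here are illustrative, chosen to
# mirror the report's "24 hours / week / month" style ranges.
BUCKETS = [(1, "< 24 hours"),
           (7, "1 - 7 days"),
           (30, "1 week - 1 month"),
           (365, "1 month - 1 year")]

def age_bucket(age_days):
    """Classify a file's modification age into a report bucket."""
    for limit, label in BUCKETS:
        if age_days < limit:
            return label
    return "> 1 year"

def aging_analysis(ages_days):
    """Count modified-but-not-backed-up files per age bucket, as the
    Modified Files Not Backed Up chart does (illustrative only)."""
    counts = {}
    for age in ages_days:
        label = age_bucket(age)
        counts[label] = counts.get(label, 0) + 1
    return counts
```

The Space Distribution chart sums file sizes per bucket instead of counting files; the bucketing itself is the same.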
Figure 13-132 Modified Files not backed up selection

To view the report, click Generate Report. We chose to view it as a graphic by then clicking the pie icon and selecting Chart: Space Distribution for All. This is shown in Figure 13-133. This chart tells you the amount of space consumed by files that have not been backed up since the last backup was run for this server.

Figure 13-133 Modified Files not backed up chart overall view
We can also select Chart: Count Distribution for All, as shown in Figure 13-134, to show the number of files in each category.

Figure 13-134 Files need backed up chart in detail view

The charts can be viewed in different ways. To select another type of chart, right-click in the chart area, select Customize this chart, and click the radio button next to the desired chart type.

Backup Storage Requirements Reporting
This option allows you to determine how much data would be backed up if you were to perform either a full or an incremental backup. The Full Backup Size option can be used regardless of the OS type and the backup application in use.
In Figure 13-135, the report is run against Windows systems by filesystem.

Figure 13-135 Backup storage requirements per filesystem

The selection can also be run by computer, as shown in Figure 13-136.

Figure 13-136 Backup storage requirement per computer and per filesystem
The Incremental Backup Size option makes use of the archive bit, so it can only be used on Windows systems; and if Tivoli Storage Manager is the backup application, the resetarchiveattribute option must be used (for Version 5.2). A sample report is shown in Figure 13-137.

Figure 13-137 Incremental reporting per Node and Filesystem based on files
The third report type here is Incremental Range Sizes Reporting. This does not rely on the archive bit (instead, it uses the modification date), so it is more generically applicable. This report makes it possible to show the actual difference between a traditional weekly full/daily incremental backup process and Tivoli Storage Manager’s progressive incremental approach. To generate this report, select Data Manager → Reporting → Backup → Backup Storage Requirements → Incremental Range Sizes → By Computer, as shown in Figure 13-138.

Figure 13-138 Incremental Range Size select By Computer
After you select the Computers of interest, click Generate Report. Figure 13-139 shows the output from this report, with the amount of data changed for different time ranges. Note that the values are cumulative, so for each time range, the values shown include the smaller time periods.

Figure 13-139 Incremental Range Sizes Report

13.12 TotalStorage Productivity Center for Data ESS Reporting
The reporting capabilities in TotalStorage Productivity Center for Data were expanded in Version 1.2 to include Enterprise Storage Server (ESS) reporting. IBM Tivoli Storage Resource Manager uses Probe jobs to collect information about the ESS. We can then use the reporting facility to view that information. The new subsystem reports show the capacity, controllers, disks, and LUNs of an ESS and their relationships to computers and filesystems within a network.

13.12.1 ESS Reporting
In this section we discuss ESS asset and storage subsystem reporting, making references to the ESS lab environment shown in Figure 13-140. Note that the host which accesses the ESS had a TotalStorage Productivity Center for Data Agent installed. This provides the fullest combination of reporting ability for the ESS. If an ESS-attached host does not have a TotalStorage Productivity Center for Data Agent installed, items such as filesystem, logical volume, and device logical names will not be displayed.
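The cumulative semantics noted above can be sketched as follows: for each range, the report totals all files modified within that many days, so each range's value includes the shorter ranges. The range boundaries here are illustrative, not the product's:

```python
def cumulative_range_sizes(files, ranges_days=(1, 7, 30, 90)):
    """files: list of (age_in_days, size_in_bytes) pairs.
    For each range, total the size of ALL files modified within that
    many days -- cumulative, so the 7-day figure includes the 1-day
    files, and so on (illustrative sketch of the report's values)."""
    return {days: sum(size for age, size in files if age <= days)
            for days in ranges_days}
```

Because the figures use modification dates rather than the archive bit, the same calculation works on UNIX as well as Windows, which is why this report is more generically applicable.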
Figure 13-140 ESS reporting lab

Prerequisites to ESS reporting
Before doing ESS reporting with Data Manager, the following conditions must be met:
- The CIM/OM server is successfully installed.
- Data Manager successfully logs in to the CIM/OM server.
- Data Manager successfully runs a discovery and probes the ESS.

Important: Refer to Chapter 5, “CIMOM install and configuration” on page 191 and 8.1, “Configuring the CIM Agents” on page 290 for additional details on confirming these prerequisites.

Data Manager runs a discovery to locate the CIM/OM server in our environment, which in turn discovers the ESSs. See 8.1.2, “Configuring CIM Agents” on page 290.

Creating the ESS Probe
IBM Tivoli Storage Resource Manager then runs a Probe to query the discovered ESS. The Probe collects detailed statistics about the storage assets in our enterprise, such as computers, storage subsystems, disk controllers, hard disks, and filesystems.
Next, we show how to create a Probe for an ESS-F20. Select Probes → Select new probe, then under the Computers tab, choose Storage Subsystems. See Figure 13-141.

Figure 13-141 Creating ESS probe

On the When to PROBE tab, we selected PROBE Now because we need to populate the backend repository. See Figure 13-142.

Figure 13-142 ESS - When to probe
Next is the Alert tab, shown in Figure 13-143. This defines the type of notification for a Probe.

Figure 13-143 ESS - Alert tab

After all parameters are defined, save the Probe definition. At this point the Probe is submitted and runs immediately.

Note: For additional information on creating Probes, see 13.3.5, “Probes” on page 545.

There are several ways to check the status of the Probe job. First, we can check the color of the Probe job entry in the navigation tree, then in the content panel. Two colors represent job status:
- GREEN - Job completed successfully with no errors
- RED - Job completed with errors
The status of the Probe job is displayed in text and in color, as shown in Figure 13-144, after selecting the Probe job output in the navigation tree. The job at 1:55 pm is in green, indicating success.

Figure 13-144 ESS - probe job status

We open the Probe job by selecting it and double-clicking the magnifying glass icon next to the job in the content window. We see the contents of the job, including detailed information on its status, as in Figure 13-145. Here, we have selected the successful Probe.

Figure 13-145 Probe job log
Asset Reports - By Storage Subsystem
With Asset reporting by storage subsystem, you can view the centralized asset repository that Data Manager constructs during a Probe. The Probe itemizes the information about computers, disks, controllers, and filesystems, and builds a hardware inventory of assets. With the backend repository now populated with DS6000 asset information, we show how to view reports that display the storage resources. We choose Data Manager → Reporting → Asset → By Storage Subsystem → Tucson DS6000. This report provides specific resource information for the DS6000 and allows us to view storage capacity at the computer, filesystem, storage subsystem, LUN, and disk level. We can also view the relationships between the components of a storage subsystem. Notice that the navigation tree is hierarchical. See Figure 13-146.

Figure 13-146 Asset by storage subsystem
We drill down to the Disk Groups. The disk group contains information related to the ESS, as well as the volume spaces and disks associated with those Disk Groups. Expanding the Disk Group node displays a list of all Disk Groups on the ESS (Figure 13-147).

Figure 13-147 ESS disk group

Continuing, we expand the disk group DG1 to view the disks and volume spaces within it. We open Volume Space VS3, which shows the disks and LUNs associated with it. The Disks subsection shows the individual disks associated with the Volume Space (see Figure 13-148).
Figure 13-148 Disks in volume spaces

Notice the LUNs subsection for disk DD0105 (Figure 13-149). This shows the LUN to disk relationship. The LUNs shown here are just a subset of all the LUNs. You can see that the LUN is spread across all the displayed disks in the content window.

Figure 13-149 Disk and LUN association with volume space
Figure 13-150 shows the discovery of a disk with no LUN associations. This is known as a hot spare. It can be used when one of the other seven disks in the disk group fails.

Figure 13-150 Hot spare LUN
We now show a high-level view of all disks in ESSF20. There are 32 disks in the ESS, as shown in the Number of Disks field in Figure 13-146 on page 639. Figure 13-151 shows a partial listing of the disks.

Figure 13-151 ESS all disks
We can also display a report of all the LUNs in the ESS. This report provides the physical disk association with each LUN. We have a total of 56 LUNs in the ESSF20, as shown in the Number of LUNs field in Figure 13-146 on page 639. A partial listing is shown in Figure 13-152.

Figure 13-152 ESS all LUNs
Storage Subsystem Reporting
We now open Reporting → Storage subsystems. Storage Subsystems reporting allows viewing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level.

By Computer
We drill down Computer Views → By Computer. The report displays the association of filesystems to the storage subsystem, LUNs, and disks on ESSF20. These reports are useful for relating computers and filesystems to different storage subsystem components. There are three options available in the Relate Computers to: pull-down, as shown in Figure 13-153.

Figure 13-153 By Computer - Relate Computer to

We select Storage Subsystems from the pull-down, select the desired computer, and click Generate. The generated report in Figure 13-154 shows that TSMSRV43P uses 9.24 GB in the ESS.

Figure 13-154 By Computer - storage subsystem
Returning to the Selection tab (Figure 13-153 on page 645), we select LUNs, choose the same host, and click Generate. Figure 13-155 shows the generated report: the relationship between TSMSRV43P and its assigned LUNs. TSMSRV43P has one LUN created on the ESS.

Figure 13-155 By Computer - LUNs

Finally, from the Selection tab (Figure 13-153 on page 645), we select Disks, our host TSMSRV43P, and click Generate. Figure 13-156 shows the report: the ESS disks assigned to the LUN on the host.

Figure 13-156 By Computer - disk
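Conceptually, each of these three By Computer views is a join over the asset repository that the Probe populated. The following Python sketch illustrates the idea; the row layout, field names, and sample values are hypothetical, not the actual repository schema:

```python
# Hypothetical LUN rows resembling the relationships the Probe collects.
LUNS = [
    {"lun": "LUN0", "host": "TSMSRV43P", "device": "/dev/hdisk1",
     "subsystem": "ESSF20", "disks": ["DD0105", "DD0106"]},
]

def relate_computer_to(luns, host, target):
    """Mimic the 'Relate Computers to:' pull-down for one host.

    target is 'subsystem', 'lun', or 'disk'.
    """
    mine = [row for row in luns if row["host"] == host]
    if target == "subsystem":
        return sorted({row["subsystem"] for row in mine})
    if target == "lun":
        return sorted(row["lun"] for row in mine)
    if target == "disk":
        # Flatten the LUN-to-disk lists, removing duplicates.
        return sorted({d for row in mine for d in row["disks"]})
    raise ValueError("unknown target: %s" % target)
```

For example, `relate_computer_to(LUNS, "TSMSRV43P", "disk")` returns the ESS disks backing the host's single LUN, mirroring the By Computer - disk report.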
By Filesystem/Logical Volume
We now drill down Computer Views → By Filesystem/Logical Volume. The report displays the association of filesystems to the storage subsystem, LUNs, and disks on ESSF20. These reports are useful for relating computers and filesystems to different storage subsystem components. There are three options available in the Relate Filesystem/Logical Volumes to: pull-down, shown in Figure 13-157.

Figure 13-157 By filesystem/logical volume

Select Storage Subsystem, the host (TSMSRV43P), and click Generate. Figure 13-158 shows the filesystems on the host which are located on the ESS.

Figure 13-158 By filesystem/logical volumes - storage subsystem
From the Selection tab (Figure 13-157 on page 647) we now choose LUNs, the host (TSMSRV43P), and click Generate. Figure 13-159 shows the LUN location of each filesystem on the host.

Figure 13-159 By filesystem/logical volume - LUN

From the Selection tab (Figure 13-157 on page 647) we now choose Disks, the host (TSMSRV43P), and click Generate. Figure 13-160 shows which disks comprise each filesystem and logical volume.

Figure 13-160 By filesystem/logical volume - Disk
By Storage Subsystem
We now drill down Storage Subsystem Views → By Storage Subsystem. These reports display the relationships of the ESS components (storage subsystems, LUNs, and disks) to the computers, filesystems, and logical volumes. There are two options available in the Relate Storage Subsystems to: pull-down, shown in Figure 13-161.

Figure 13-161 By Storage Subsystems

Select Computers from the pull-down, the subsystem ESSF20, and click Generate. Figure 13-162 shows the space used by each host on the storage subsystem.

Figure 13-162 By Storage subsystem - Computer
Now, select Filesystem/Logical Volumes from Figure 13-161, the ESSF20 subsystem, and click Generate. Figure 13-163 shows each host's filesystems and logical volumes, with their capacity and free space.

Figure 13-163 By storage subsystem - filesystem/logical volume

By LUN
Continuing, we drill down Storage Subsystem Views → By LUNs (Figure 13-164).

Figure 13-164 By LUNs
Select Computer from the Relate LUNs to: pull-down, select the subsystem (ESSF20) with the associated disks (the default is all), and click Generate Report. Figure 13-165 shows the LUNs assigned to each host, with the host's logical name for the LUN (/dev/hdisk1 in this case).

Figure 13-165 By LUN - computer

Now select Filesystem/Logical Volumes from the Relate LUNs to: pull-down, the ESSF20 subsystem with associated logical disks (the default is all), and click Generate Report. Figure 13-166 shows the relationships between the LUNs, computers, and filesystems/logical volumes, including free space and host device logical names.

Figure 13-166 By LUNs - filesystem/logical volumes
Disks
Now we drill down Storage Subsystem Views → Disks. There are two options available in the Relate Disks to: pull-down, shown in Figure 13-167.

Figure 13-167 Disks

Select Computer from the pull-down, the ESSF20 subsystem with related disks (the default is all), and click Generate Report. Figure 13-168 shows the relationships of the disks to the hosts.

Figure 13-168 Disks - computer
Now select Filesystem/Logical Volumes from the pull-down (Figure 13-167 on page 652), the ESSF20 subsystem with related disks (the default is all), and click Generate Report. Figure 13-169 shows the relationship between the ESS disks and the filesystems and logical volumes.

Figure 13-169 Disks - filesystem/logical volumes

Note: For demonstration purposes, we have reduced some of the fields in the reports.

13.13 IBM Tivoli Storage Resource Manager top 10 reports
After analyzing typical customer scenarios, we compiled the following list of “top 10 reports,” which we recommend running regularly as a best practice:
- ESS used and free storage
- ESS attached hosts report
- Computer uptime
- Growth in storage used and number of files
- Incremental backup trends
- Database reports against DBMS size
- Database instance storage report
- Database reports size by instance and by computer
- Locate the LUN on which a database is allocated
- Finding important files on your systems

13.13.1 ESS used and free storage
This report shows the free and used storage on an ESS system. To generate this filesystem logical view report, navigate Data Manager → Reporting → Storage Subsystem → Computer Views → By Filesystem/Logical Volumes. Select the computers to report on, and select Disks from the Relate Filesystems/Logical Volumes To pull-down, as in Figure 13-170.
Figure 13-170 ESS relation to computer selected by disk

Click Generate Report. The report is shown in Figure 13-171. The following columns are displayed:
- Storage Subsystem
- Storage Subsystem Type
- Manufacturer
- Model
- Serial Number
- Computer
- Filesystem/Logical Volume Path
- Capacity
- Free Space
- Physical Allocation

Figure 13-171 Report for Filesystem/Logical Volumes Part 1
Figure 13-172 shows the right-hand columns of the same report.

Figure 13-172 Report for Filesystem/Logical Volumes Part 2

This report provides quick answers to how much space on the ESS is allocated to each filesystem. Select LUNs this time from the pull-down in Figure 13-170 on page 654. The report in Figure 13-173 shows the LUN to host mapping for the ESS, which filesystem is associated with each LUN, and the free space.

Figure 13-173 Computer view to the filesystem with capacity and free space
13.13.2 ESS attached hosts report
This report shows which systems are using storage on an ESS. This is useful when ESS maintenance is applied, so that the administrators of affected systems can be informed. To generate this report, select Data Manager → Reporting → Storage Subsystem → Computer Views → By Computer in the navigation tree. We selected all computers, as in Figure 13-174.

Figure 13-174 ESS selection per computer

Click Generate Report; the report is shown in Figure 13-175.

Figure 13-175 ESS connections to computer report
Note that you can sort the report on a different column heading by clicking it. The current sort field is indicated by the small pointer next to the field name. Clicking again in the same column reverses the sort order.

13.13.3 Computer Uptime Reporting
Uptime is an important IT metric in the enterprise. To generate a Computer Uptime report, select Data Manager → Reporting → Availability → Computer Uptime → By Computer. Select the computers of interest by clicking the Selection... button and checking the boxes next to the desired computers in the Computer Selection window (Figure 13-176), and click OK.

Figure 13-176 Computer Uptime Report - computer selection

In the Selection window, specify a date range (optional), and click Generate Report, as shown in Figure 13-177.

Figure 13-177 Computer Uptime report selection
For each computer, the percent availability, number of reboots, total down time, and average down time are given, as shown in Figure 13-178. The default sort order is by descending Total Down Time.

Figure 13-178 Computer Uptime report part 1

You can also display this information graphically by selecting the pie chart icon at the top of the report, as shown in Figure 13-179.

Figure 13-179 Computer Uptime report graphical combined (stacked bar)
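The four columns of this report can be derived from simple bookkeeping on outage records. The sketch below is our own illustration of that arithmetic, not the product's actual calculation:

```python
def uptime_metrics(period_hours, outages_hours):
    """Compute the Computer Uptime columns for one computer.

    period_hours: length of the reporting period in hours.
    outages_hours: list of individual outage durations in hours
    (one entry per reboot).
    Returns (percent_availability, reboots, total_down, avg_down).
    """
    total_down = sum(outages_hours)
    reboots = len(outages_hours)
    percent_up = 100.0 * (period_hours - total_down) / period_hours
    avg_down = total_down / reboots if reboots else 0.0
    return percent_up, reboots, total_down, avg_down
```

For a 720-hour month with two 3.6-hour outages, this gives 99% availability, 2 reboots, 7.2 hours total down time, and 3.6 hours average down time.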
Figure 13-180 shows an unstacked bar chart of the same information (right-click and select Bar Chart).

Figure 13-180 Computer Uptime report graphical (bar chart)

13.13.4 Growth in storage used and number of files
The Backup Reporting features of Data Manager also give a convenient way to track the total storage used by files on each computer, as well as the number of files stored. The information can be presented graphically, to show historical numbers and future trends. It helps you plan future storage requirements, alerts you to potential problems, and (if you use a traditional full and incremental backup product) helps plan your backup server storage requirements, since this report shows the size of a full backup of each computer. Select Data Manager → Reporting → Backup → Backup Storage Requirements → Full Backup Size → By Computer. We used the Profile TPCUser.Summary By Filesystem/Directory and selected all computers, as in Figure 13-181. Click Generate Report.

Figure 13-181 Generate Full Backup Size report
Figure 13-182 shows the total disk space used by all the files, and the number of files on each computer. The top row shows the totals for all Agents.

Figure 13-182 Select History chart for File count

To drill down, select all the computers (using the Shift key) so they are highlighted, then click the pie icon and select History Chart: Space Usage for Selected. The generated report (Figure 13-183) shows how the total full backup size has fluctuated and how it is predicted to change in the future (dotted lines; to disable this, click Hide Trends).

Figure 13-183 History Chart: Space Used
To display the file count graph, select History Chart: File count from the pie icon in Figure 13-182. The output report is shown in Figure 13-184, which shows trends in the number of files on each computer.

Figure 13-184 History chart: File Count

These reports help you find potential problems (for example, a computer system that shows an unexpected sudden upward or downward spike) and also predict disk and backup requirements for the future.

13.13.5 Incremental backup trends
This report shows the rate of modification of files, which is very useful for incremental backup planning. Select Data Manager → Reporting → Backup → Backup Storage Requirements → Incremental Range Size → By Filesystem. Select the Profile TPCUser.By Modification, as shown in Figure 13-185.
Figure 13-185 Incremental Range selection based on filespace

The generated report shows all the filesystems on the selected computers, as in Figure 13-186.

Figure 13-186 Summary of all filespace

The third column shows the total number and total size of files (for all the systems, then broken down by filesystem). Then there are “Last Modified” columns for one day, one week, one month, two months, three months, six months, nine months, and one year. Each of these gives the number and size of the modified files.
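The cumulative bucketing behind these “Last Modified” columns can be sketched in a few lines of Python. This is a conceptual illustration only; the range boundaries mirror the report columns, and the code is not product code:

```python
from datetime import datetime, timedelta

# Days-ago boundaries mirroring the report's "Last Modified" columns.
RANGES_DAYS = [1, 7, 30, 60, 90, 180, 270, 365]

def last_modified_ranges(files, now):
    """files: list of (size_bytes, modified_datetime) tuples.

    Returns {days: (file_count, total_bytes)}. The buckets are
    cumulative, so a file changed yesterday is counted in every range.
    """
    result = {}
    for days in RANGES_DAYS:
        cutoff = now - timedelta(days=days)
        recent = [(size, mtime) for size, mtime in files if mtime >= cutoff]
        result[days] = (len(recent), sum(size for size, _ in recent))
    return result
```

Comparing the one-day and one-week buckets against the total size is exactly the comparison of daily incremental versus weekly full backup volumes that the earlier Incremental Range Sizes report makes.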
To generate charts, highlight all the systems, and click the pie icon. Select Chart: Count Distribution for Selected, as shown in Figure 13-187.

Figure 13-187 Selection for Filesystem and computer to generate a graphic

The chart is shown in Figure 13-188. Note that when your cursor passes over a bar, a pop-up shows the number of files associated with that bar.

Figure 13-188 Bar chart for Incremental Range Size by Filesystem
You can display other filesystems using the Next 2 and Prev 2 buttons. Change the chart format by right-clicking and selecting a different layout. Figure 13-189 is a pie chart of the same data. The pop-ups work here also.

Figure 13-189 Pie chart selected with number of files which have been modified

With these reports you can track and forecast your backups. You can also display backup behavior for the last one, three, nine, or 12 months.
13.13.6 Database reports against DBMS size
This report shows an enterprise-wide view of storage usage by all RDBMSs. Select Data Manager - Databases → Reporting → Capacity → All DBMSs → Total Instance Storage → Network-wide and click Generate Report. Figure 13-190 shows a sample output.

Figure 13-190 Total Instance storage used network wide

This is a quick overview of database space consumption across the network. To drill down on a particular RDBMS type, select the appropriate magnifying glass icon, as in Figure 13-191.

Figure 13-191 DBMS drill down to the computer reports
The report (Figure 13-192) displays.

Figure 13-192 DBMS drill down to the computer result

Figure 13-192 shows the fields for an Oracle database. The fields for a DB2 database are as follows:
- Computer name
- Total Size
- Container Capacity
- Container Free Space
- Log File Capacity
- Tablespace Count
- Container Count
- Log File Count
13.13.7 Database instance storage report
This report shows storage utilization by database instance. Go to Data Manager - Databases → Reporting → Capacity → UDB → Total Instance Storage → By Instance, select the computer(s) of interest, and click Generate Report. Figure 13-193 shows the result.

Figure 13-193 DBMS report Total Instance Storage by Instance

Note that you can select any RDBMS that is installed in your network. The report shows the following information for each Agent with DB2, plus a total (summary):
- Computer name
- RDBMS instance
- RDBMS type
- Total size
- Container capacity
- Container free space
- Log file capacity
- Tablespace count
- Container count
- Log file count

13.13.8 Database reports size by instance and by computer
The next report is based on the previous report (the database instance storage report), but in more detail. From the report in Figure 13-193, click the magnifying glass next to a computer of interest. Then drill down further on the generated report, as in Figure 13-194.
Figure 13-194 Instance report RDBMS overview

Select the computer again, and click the magnifying glass. The report shows the entire DB2 environment running on computer Colorado. We have 10 DB2 UDB databases, shown in Figure 13-195 and Figure 13-196.

Figure 13-195 Instance running on computer Colorado first part
Scroll to the right side of the panel (Figure 13-196).

Figure 13-196 Instance running on computer Colorado second part

Here we can see which databases are running in ARCHIVELOG mode.

13.13.9 Locate the LUN on which a database is allocated
This report shows you which disk or LUN is used by a database. Go to Data Manager - Databases → Reporting → Capacity → UDB → Total Instance Storage → By Instance, select the Agent(s) of interest, then click Generate Report. Figure 13-197 shows the result.

Figure 13-197 LUN report selection for a database
Select an Agent, and click the magnifying glass to drill down. Figure 13-198 displays. The report shows the following columns:
- File Type
- Path
- File Size
- Free Space
- Auto Extend of a File

Figure 13-198 Database select File and Path
Now select a particular data file, and click the magnifying glass. The generated pie chart is shown in Figure 13-199. We can see this data file is allocated on the C: drive.

Figure 13-199 Report DB2 File in a Pie Chart for DB2 File

Click the View Logical Volume button at the bottom to display the LUN report (Figure 13-200).

Figure 13-200 LUN information
Using this procedure, we can find the LUNs where all the database data files are stored. This information is useful for a variety of purposes, for example, performance planning, availability planning, and assessing the impact of a LUN failure.

13.13.10 Finding important files on your systems
This report generates a search for specific files over all computers managed by a Data Manager Server. As an example, we created one text file each on Lochness and Wisla, called lochness.txt and wisla.txt, respectively. We chose this search because it returns a relatively small number of results across all machines; however, any search criteria could be used. The task requires a number of steps:
1. Define a new Profile.
2. Bind the new Profile into a Scan.
3. Generate a report with your Profile.
4. Define a new Constraint.
5. Generate a report to find the defined Constraint.

1. Define the new Profile
First create the Profile: select Data Manager → Monitoring → Profiles, right-click, and select Create Profile. Fill out the description field accordingly, and check Summarize space usage by, Accumulate history, and Gather information on as desired. In the bottom half, click size distribution of files, as shown in Figure 13-201.

Figure 13-201 Create Profile for own File search
Now select the File Filter tab. Click in the All files selected area and right-click to create a new condition, as shown in Figure 13-202.

Figure 13-202 Create new Condition

Enter the desired file pattern into the Match field, and click Add to bring the condition to the display window below, as in Figure 13-203. You can select from different conditions:
- Matches any of
- Matches none of
- Matches
- Does not match

When you have finished the condition, click OK. In our case we are matching Tivoli Storage Manager option files.

Figure 13-203 Create Condition add
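The four condition types behave like simple tests of a filename against a set of wildcard patterns. The sketch below is our interpretation of the dialog's semantics using shell-style wildcards, not product code:

```python
from fnmatch import fnmatch

def matches_condition(filename, condition, patterns):
    """Evaluate one File Filter condition against a filename.

    condition is one of the four choices offered in the dialog;
    patterns are wildcard patterns such as '*.opt'.
    """
    hit = any(fnmatch(filename, pattern) for pattern in patterns)
    if condition in ("Matches", "Matches any of"):
        return hit
    if condition in ("Does not match", "Matches none of"):
        return not hit
    raise ValueError("unknown condition: %s" % condition)
```

For example, a Tivoli Storage Manager option file such as dsm.opt satisfies a "Matches any of" condition with the pattern *.opt, while an unrelated text file satisfies the corresponding "Matches none of" condition.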
Figure 13-204 shows our newly created Condition.

Figure 13-204 Saved Condition in new Profile

Now save the new Profile with an appropriate name (in this instance, Search for files). The saved Profile now appears in the Profiles list; see Figure 13-205 on page 675.

Tip: We recommend choosing meaningful Profile names, which reflect the content or function of the Profile.
Figure 13-205 Listed Profiles containing Search for files

2. Bind the new Profile into a Scan
First, create a new Scan: select Data Manager → Monitoring → Scans. We chose TPCUser.Default Scan, as shown in Figure 13-206 on page 675. Fill in a description for this Scan and select the Filesystems and Computers on which the Scan will run.

Figure 13-206 Add Profile to Scan
On the Profiles tab, select the newly created Profile and add it to the Profiles to apply to Filesystems column, as shown in Figure 13-207.

Figure 13-207 Add Profiles to apply to filesystems

Now select the time when the Scan should run, save the Scan, and then check the result.

3. Generate a report with your Profile
To view the results, select Data Manager → Reporting → Usage → Files → File Size Distribution → By Filesystem. Select all filesystems you wish to report on, select the Profile you created (Figure 13-208), and click Generate Report. The report contains all the option files discovered by the Scan, as in Figure 13-209.
Figure 13-208 Select Profile: tpcadmin.Search for files

Figure 13-209 Report with number of found Search for files

Note that we found one file each on the LOCHNESS and WISLA C drives.
4. Define a new Constraint
We would like to know where specifically these files are located. To set up this search, select Data Manager → Policy Management → Constraints → TPCUser.Orphaned File Constraint, as shown in Figure 13-210. Enter a description, and select the Filesystem Groups and Computers where you want to locate the files.

Figure 13-210 Create Orphaned File search

Select the Options tab, then select Edit Filter, as shown in Figure 13-211.

Figure 13-211 Update the Orphaned selection
On the Edit Filter pop-up, double-click the ATTRIBUTES filter. Here we replace the ORPHANED condition with our own filter, since we want to search for the text files we created, not orphaned files (Figure 13-212).

Figure 13-212 Update the selection with own data

Use the Del button to delete the ORPHANED condition, then select NAME from the Attributes pull-down, and click the Add button to add another Attributes condition. We specify the text files we created as the search target, as in Figure 13-213.

Figure 13-213 Enter the file search criteria
After each file pattern entry, click Add to save it. When all search arguments are entered, click OK to save the search. The selection is now complete, as in Figure 13-214.

Figure 13-214 File Filter selection reconfirm

Click OK again. Save the search with a new description and name (File → Save As), so that you do not overwrite the original TPCUser.Orphaned File Constraint. We saved the file as “File search.” Finally, we run the Scan and check the Scan job log for correct execution, as shown in Figure 13-215.

Figure 13-215 Scan log check
5. Generate a report to find the defined Constraint
Now look for the results of the file name search. Select Data Manager → Reporting → Usage Violations → Constraint Violations → By Computer, select all computers, and generate the report. The report presents a summary, as in Figure 13-216.

Figure 13-216 Summary report of all Tivoli Storage Manager option files

To drill down, click the magnifying glass on WISLA, as in Figure 13-217. This shows all the filesystems on WISLA where matching files were found.

Figure 13-217 File selection for computer WISLA
Click the magnifying glass on a filesystem (the C drive, in this case). This shows all the files found which matched the pattern, as in Figure 13-218. Note that one file was reported, which matches the summary view given in Figure 13-209 on page 677.

Figure 13-218 Report for Tivoli Storage Manager Option file searched

You can also drill down to individual files for detailed information, as in Figure 13-219.

Figure 13-219 File detail information
13.14 Creating customized reports
Customized reporting within Data Manager is done through the My Reports option, which is available for both Data Manager and Data Manager for Databases. There are three main options available within My Reports:
- System Reports
- Reports owned by username
- Batch Reports

System Reports, while included here in the customized reporting section, is in fact not currently customizable. We still discuss it in this section because it is part of the My Reports group.

Reports owned by username, where username is the currently logged in Data Manager username, are modified versions of standard reports from the Reporting option. You only see reports here that you have modified and saved.

Batch Reports are reports that are typically set up to run on a schedule, although they can be run interactively. The key difference between Batch Reports and other reporting options is that with Batch Reports, the output is always written to an output file rather than displayed on the screen.

13.14.1 System Reports
These reports can, at this point in time at least, only be run as is. You cannot modify the parameters in any way, nor can you add additional reports to the list. These reports provide the same information as is available from running reports from the Reporting option. The intent of these reports is to provide frequently needed information quickly and repeatedly, without having to reenter parameters.
Data Manager
Figure 13-220 shows the available System Reports for Data Manager.

Figure 13-220 My Reports - System Reports

Figure 13-221 shows the output from running the Storage Capacity system report. We could have generated exactly the same output by selecting Data Manager → Reporting → Capacity → Disk Capacity → By Computer → Generate Report. Obviously, selecting Data Manager → My Reports → Storage Capacity is much simpler.

Figure 13-221 My Reports - Storage Capacity
Data Manager for Databases

The System Reports available for Data Manager for Databases are shown in Figure 13-222. While there are quite a few reports available, they fall into three main categories:

- Database storage by database
- Database storage by user
- Database freespace

The only report that does not fall into one of those categories is a usage violation report.

Figure 13-222 shows the output from the All Dbms - User Database Space Usage report. We are not so much interested in the report contents here as in the fact that when the report was run, it produced a report for all users. You can go back to the Selection tab and select specific users if required. This capability exists for all of the System Reports.

Figure 13-222 Available System Reports for databases

Chapter 13. Using TotalStorage Productivity Center for Data 685
  • 706. 13.14.2 Reports owned by a specific username In concept this option is very similar to System Reports. You can include here those reports that you need to run regularly, consistently and easily. The difference, compared to System Reports, is that you get to decide what reports are included and what they look like. However, it is important to remember that you will only see those reports that have been created by the currently logged in TotalStorage Productivity Center for Data username. Data Manager We will define a report here for tpcadmin, the username that we are currently logged in as. We will create a report that is exactly the same as the Storage Capacity system report as shown in Figure 13-221 on page 684. In practice this is not something you would normally do as a report already exists. However, this will demonstrate more clearly how the options relate to each other. We select Data Manager → Reporting → Capacity → Disk Capacity → By Computer and click Generate Report. Once the report is produced, we save the report definition, using the name My Storage Capacity. This is shown in Figure 13-223. Figure 13-223 Create My Storage Capacity report 686 IBM TotalStorage Productivity Center V2.3: Getting Started
Once the report is saved, you will see it available under username's Reports for tpcadmin, as shown in Figure 13-224. There are a few features of saved reports worth mentioning here. Firstly, characteristics such as sort order are not saved with the report definition; however, selection criteria are saved. Secondly, you can override the selection criteria when running your report. By default, only the objects selected at the time of the save are reported. However, you can use the Selection tab when running the saved report to include or exclude objects from the report. If you change the selection criteria, you can resave the report to update the definition, or save it under another name to create a new definition.

Figure 13-224 My Storage Report saved

Chapter 13. Using TotalStorage Productivity Center for Data 687
Data Manager for Databases

Database reports created for specific users, in this case tpcadmin, are set up the same way as in Data Manager. We will show one brief example here. We will take one of the reports that we created earlier in our discussion on Reporting (in this case Figure 13-100 on page 609), the Monitored Tables by RDBMS Type report, and set it up so that it can be run more easily. First we run the report by choosing Data Manager - Databases → Reporting → Usage → All DBMSs → Tables → Monitored Tables → By RDBMS Type and clicking Generate Report. We then save the report definition, naming it Monitored Tables by RDBMS Type. This is shown in Figure 13-225.

Figure 13-225 Monitored Tables by RDBMS Types customized report

The report can now be run more easily by choosing IBM Tivoli SRM for Databases → My Reports → username's Reports → Monitored Tables by RDBMS Type.

13.14.3 Batch Reports

In this section we show how we set up some Batch Reports. All of the reports were set up in the same way, so we will use only one as an example. The process is the same whether the report is for Data Manager or Data Manager for Databases.

688 IBM TotalStorage Productivity Center V2.3: Getting Started
Data Manager

To set up a new report, select Data Manager → My Reports → Batch Reports, right-click Batch Reports, and select Create Batch Report. You will then see the screen shown in Figure 13-226.

Figure 13-226 Create a Batch Report

Now it is simply a matter of specifying what is to be reported, when to report it, and what form the output should take. In this case we are going to create a system uptime report. As shown in Figure 13-227, we entered our report description of System Uptime, selected Availability → Computer Uptime → By Computer, and clicked >>. Our selection is then moved into the right-hand panel, Current Selections.

Figure 13-227 Create a Batch Report - report selection

Chapter 13. Using TotalStorage Productivity Center for Data 689
We then selected the Selection tab, which is shown in Figure 13-228. Here we are able to select a subset of the available data, either by reporting for a specified time range or for a subset of the available systems. We took the defaults here.

Figure 13-228 Create a Batch Report - selection

On the Options tab, we specified that the report should be executed and generated on the Agent called COLORADO, which is our Data Manager server. We selected HTML for Report Type Specification and then changed the rules for the naming of the output file under Output File Specification. By default the name will be {Report creator}.{Report name}.{Report run number}. In this case we do not really care who created the report, and a variable like the report run number, which changes every time a new version of the report is created, makes it difficult to access the file from a static Web page. So we changed the report name to be {Report name}.html. The report is created in <install-directory>\log\<Data-agent-name>\reports on the Agent system where the report job is executed. There is no ability to override the directory name. For example, the directory is C:\Program Files\tivoli\ep\subagents\TPC\Data\log\colorado\reports on our Windows 2000 Data Manager server COLORADO, or /usr/tivoli/tsrm/log/brazil/reports on an AIX Data Manager Agent called BRAZIL. The Options tab is shown in Figure 13-229. Note that it is possible to run a script after the report is created to perform some type of post-processing. For example, you might need to copy the output file to another system if your Web server is on a system that is not running a Data Manager Agent.

690 IBM TotalStorage Productivity Center V2.3: Getting Started
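A post-processing script of the kind mentioned above can be as simple as a file copy. The sketch below is our own illustration, not part of the product: the function name, the Web root path, and the assumption that the output name was changed to {Report name}.html are all ours.

```python
import shutil
from pathlib import Path

# Illustrative defaults only -- both paths are assumptions for this example.
REPORTS_DIR = r"C:\Program Files\tivoli\ep\subagents\TPC\Data\log\colorado\reports"
WEB_ROOT = r"C:\Inetpub\wwwroot\tpcreports"

def publish(report_name, reports_dir=REPORTS_DIR, web_root=WEB_ROOT):
    """Copy a generated batch report into a Web server document root."""
    src = Path(reports_dir) / f"{report_name}.html"
    dst = Path(web_root) / src.name
    dst.parent.mkdir(parents=True, exist_ok=True)  # create the Web root if absent
    shutil.copy2(src, dst)  # copy2 preserves the report's timestamp
    return dst
```

Because the output name is fixed to {Report name}.html, the destination file name is stable and a static Web page can link to it.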
  • 711. Figure 13-229 Create a Batch Report - options On the When to REPORT tab we specified when the report should be generated. We chose REPORT Repeatedly and then selected a time early in the morning (3:00 AM) and specified that the report should be generated every day. This is shown in Figure 13-230. Figure 13-230 Create a Batch Report - When to REPORT Chapter 13. Using TotalStorage Productivity Center for Data 691
  • 712. We left the Alert tab options as default, but it is possible to generate an Alert through several mechanisms including e-mail, an SNMP trap, or the Windows event log should the generation of the report fail. Finally, we saved the report, calling it System Uptime, as shown in Figure 13-231. Figure 13-231 Create a Batch Report - saving the report 692 IBM TotalStorage Productivity Center V2.3: Getting Started
Data Manager for Databases

We will use the same example here as in 13.14.2, “Reports owned by a specific username” on page 686, that is, the Monitored Tables by RDBMS Type report, but this time we will save it in HTML format. We choose Data Manager - Databases → My Reports → Batch Reports, right-click Batch Reports, and select Create Batch Report as shown in Figure 13-232.

Figure 13-232 Create a database Batch Report

Figure 13-233 shows the Report tab. We expanded in turn Usage → All DBMS’s → Tables → Monitored Tables → By RDBMS Type and clicked >>. We also entered a Description of Monitored Tables by RDBMS Type.

Chapter 13. Using TotalStorage Productivity Center for Data 693
Figure 13-233 Create a database Batch Report - Report tab

We accepted the defaults on the Selection tab, which is to report on all RDBMS types, and then went to the Options tab, shown in Figure 13-234. We set the Agent computer that will run the report to COLORADO. Note that the system on which you run the report must be licensed for each type of database that you are reporting on. If we were to run the report on COLORADO, the Data Manager server system, we would need to have the Data Manager for Databases licenses for Oracle and SQL Server loaded there, even though COLORADO does not run these databases. We also set the report type to HTML and changed the output file name to be {Report name}.html. This is shown in Figure 13-234.

694 IBM TotalStorage Productivity Center V2.3: Getting Started
  • 715. Figure 13-234 Create a database Batch Report - Options tab On the When to Report tab, shown in Figure 13-235, we chose REPORT Repeatedly and set a start time. Figure 13-235 Create a database Batch Report - When to Report tab We did not change anything in the Alert tab. We saved the definition with the name Monitored Tables by RDBMS Type as shown in Figure 13-236. Chapter 13. Using TotalStorage Productivity Center for Data 695
  • 716. Figure 13-236 Create a database Batch Report - save definition We can now run the report by choosing Data Manager - Databases → My Reports → Batch Reports and then right-clicking tpcadmin.Monitored Tables by RDBMS Type and choosing Run Now. Figure 13-237 shows the output from the report execution. Figure 13-237 Monitored Tables by RDBMS Type batch report output 696 IBM TotalStorage Productivity Center V2.3: Getting Started
13.15 Setting up a schedule for daily reports

Data Manager can produce reports according to a schedule. In our lab environment, we set up a number of Batch Reports as shown in Figure 13-238. Note that the name of each of the reports is prefixed by tpcadmin. This is the Windows username that we used to log into Data Manager. Even though the reports were created by a particular user, other Data Manager administrative users still have access to the reports (Data Manager non-administrative users can only look at the results).

It is possible to generate output from Batch Reports in various formats, including HTML, CSV (comma-separated values), and formatted reports. For all of the reports that we set up, we specified HTML as the output type and set them to run on a daily schedule. That way it is easy to use a browser to quickly look at the state of the organization’s storage. It also means that anyone can look at the reported data through their browser, without having access to, or indeed knowing how to use, Data Manager. Obviously, if unrestricted access to this data is not desirable, some form of password-based security could be included within the Web page.

Currently, all of the HTML output from Batch Reports is in table format; graphs cannot be produced. There is also no ability to affect the layout of the reports in terms of sort order, nominating the columns to be displayed, or the column size. The interactive reporting capability of the product does allow graphs to be produced and gives you some additional control over what the output looks like. To go further than that, you can export to a CSV file and then use a tool such as Lotus 1-2-3® or Microsoft Excel to manipulate the output.

Figure 13-238 Batch Reports listing

The next section shows how to develop the Web site.

Chapter 13. Using TotalStorage Productivity Center for Data 697
13.16 Setting up a reports Web site

Since Data Manager can easily generate reports in HTML format, it is a logical extension to set up a Web site where the reports can be easily viewed. In keeping with the product’s ease of installation and use, we took a fairly simple approach to creating the Web site. We used the Microsoft Word Web Page Wizard to create the basic layout of the page as shown in Figure 13-239. The main page has two frames. In the left-hand frame we created links to each of the report files. The right-hand frame is where the reports are displayed. As additional Batch Reports are needed, it is a relatively simple process to edit the HTML source and include another hot link. Obviously, this could be made more sophisticated; an example would be to have the page list all HTML files within the report directory.

Figure 13-239 MS Word created Web page

We then used the Virtual Directory Creation Wizard within Microsoft Internet Information Server (IIS) to set up access to the reports as shown in Figure 13-240. Detailed information on using IIS is in 8.2.2, “Using Internet Information Server” on page 299.

698 IBM TotalStorage Productivity Center V2.3: Getting Started
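One way to implement the more sophisticated variant mentioned above, listing every HTML file in the report directory automatically, is to regenerate the link frame with a small script. This is a sketch under our own assumptions; the file name, frame target, and directory layout are illustrative, not part of the product.

```python
from pathlib import Path

def build_report_index(reports_dir, index_name="reportlist.html"):
    """Write a simple link page listing every HTML report in reports_dir."""
    reports_dir = Path(reports_dir)
    links = [
        f'<a href="{f.name}" target="report">{f.stem}</a><br>'
        for f in sorted(reports_dir.glob("*.html"))
        if f.name != index_name  # do not link the index page to itself
    ]
    html = "<html><body>\n" + "\n".join(links) + "\n</body></html>\n"
    (reports_dir / index_name).write_text(html)
    return len(links)
```

Run after each batch cycle (for example, from a batch report's post-processing script) so that new reports appear without hand-editing the HTML source.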
  • 719. Figure 13-240 Setting up a Virtual Directory within IIS We could then access the reports through a Web browser as shown in Figure 13-241. Figure 13-241 Reports available from a Web browser Chapter 13. Using TotalStorage Productivity Center for Data 699
13.17 Charging for storage usage

Through the Data Manager for Chargeback product, Data Manager provides the ability to produce Chargeback information for storage usage. The following items can have charges allocated against them:

- Operating system storage by user
- Operating system disk capacity by computer
- Storage usage by database user
- Total size by database-tablespace

For each of the Chargeback by user options, a Profile needs to be specified. Profiles are covered in “Probes” on page 527. Data Manager can directly produce an invoice or create a file in CIMS format. CIMS is a set of resource accounting tools that allow you to track, manage, allocate, and charge for IT resources and costs. For more information on CIMS see: http://www.cims.com.

Figure 13-242 shows the Parameter Definition screen. The costs allocated here do not represent any real environment, but serve as an example based on these assumptions:

- Disk hardware costs, including controllers and switches, are $0.50 per MB.
- Hardware costs are only 20% of the total cost over the life of the storage = $2.50 per MB.
- On average only 50% of the capacity is used = $5.00 per MB used.
- The expected life of the storage is 4 years: $5.00 / 48 = $0.1042 per MB per month.
- The figures used are for monthly Chargeback.
- Chargeback is for cost recovery only, with no profit.

Figure 13-242 Chargeback parameter definition

In this example we have chosen to perform Chargeback by computer. It is possible to separately charge for database usage and use a different rate from the computer rate. To do this you would need to set up a Profile that excludes the database data; otherwise, it would be counted twice.

700 IBM TotalStorage Productivity Center V2.3: Getting Started
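The arithmetic behind the $0.1042 per MB per month figure in the assumptions above can be checked with a few lines (the values are the example's, not real costs):

```python
# Reproduce the example's monthly chargeback rate derivation.
hardware_cost_per_mb = 0.50                       # disks, controllers, switches
total_cost_per_mb = hardware_cost_per_mb / 0.20   # hardware is 20% of total -> $2.50/MB
cost_per_used_mb = total_cost_per_mb / 0.50       # 50% average utilization -> $5.00 per used MB
months = 4 * 12                                   # 4-year expected life = 48 months
monthly_rate = cost_per_used_mb / months          # dollars per MB per month

print(round(monthly_rate, 4))  # 0.1042
```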
Chargeback is useful even if you do not actually collect revenue from your users for the resources consumed. It is a very powerful tool for raising awareness within the organization of the cost of storage, and of the need to have the appropriate tools and processes in place to manage storage effectively and efficiently. Figure 13-243 shows the Chargeback Report being created. Currently, it is not possible to have the Chargeback Report created automatically (that is, scheduled).

Figure 13-243 Create the Chargeback Report

Example 13-5 shows the Chargeback Report that was produced.

Example 13-5 Chargeback Report

Data Manager - Chargeback                               page 1
Computer Disk Space Invoice                       Aug 23, 2005

tpcadmin.Linux Systems
NAME                  SPACE          COST
                      GB             0.104/GB
klchl5h               0              0.00
group total           0              0.00

Data Manager - Chargeback                               page 2
Computer Disk Space Invoice                       Aug 23, 2005

tpcadmin.Windows DB Systems
NAME                  SPACE          COST
                      GB             0.104/GB
colorado              69             7.19
senegal               0              0.00
group total           69             7.19

Data Manager - Chargeback                               page 3
Computer Disk Space Invoice                       Aug 23, 2005

Chapter 13. Using TotalStorage Productivity Center for Data 701
tpcadmin.Windows Systems
NAME                  SPACE          COST
                      GB             0.104/GB
gallium               59             6.15
lochness              75             7.82
wisla                 75             7.82
group total           209            21.79

Data Manager - Chargeback                               page 4
Computer Disk Space Invoice                       Aug 23, 2005

TPCUser.Default Computer Group
NAME                                 SPACE          COST
                                     GB             0.104/GB
Cluster Group.DB2CLUSTER.ITSOSJNT    137            14.28
group total                          137            14.28

Data Manager - Chargeback                               page 5
Run Summary                                       Aug 23, 2005

Computer Disk Space Invoice          415 GB         43.26
run total                                           43.26

Example 13-6 shows the Chargeback Report in CIMS format.

Example 13-6 Chargeback Report in CIMS format

TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Linux Systems,klchl5h,1,0
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows DB Systems,colorado,1,71687000
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows DB Systems,senegal,1,0
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,gallium,1,61762720
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,lochness,1,78156288
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,wisla,1,78156288
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,TPCUser,Default Computer Group,Cluster Group.DB2CLUSTER.ITSOSJNT,1,142849536

702 IBM TotalStorage Productivity Center V2.3: Getting Started
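Each CIMS record shown in Example 13-6 is a comma-separated line. The sketch below splits such records into named fields. The field names are our own guesses inferred from the example output, not the official CIMS record layout; likewise, the units of the final space figure are an assumption. Consult the CIMS documentation for the authoritative format.

```python
import csv
import io

# Guessed field names, inferred from the example records -- assumptions only.
FIELDS = ["record_type", "start_date", "end_date", "start_time", "end_time",
          "blank", "shift", "creator", "group", "computer", "units", "space"]

def parse_cims(text):
    """Split CIMS-format lines into dictionaries keyed by our guessed names."""
    rows = []
    for rec in csv.reader(io.StringIO(text)):
        if rec:  # skip blank lines
            rows.append(dict(zip(FIELDS, rec)))
    return rows

sample = ("TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,"
          "tpcadmin,Windows Systems,gallium,1,61762720")
print(parse_cims(sample)[0]["computer"])  # gallium
```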
Chapter 14. Using TotalStorage Productivity Center for Fabric

In this chapter we provide an introduction to the features of TotalStorage Productivity Center for Fabric. We discuss the following topics:

- IBM Tivoli NetView navigation overview
- Topology view
- Data collection, reporting, and SmartSets

© Copyright IBM Corp. 2005. All rights reserved. 703
14.1 NetView navigation overview

Since TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager) uses IBM Tivoli NetView (abbreviated as NetView) for display, before going into further detail we give you a basic overview of the NetView interface, how to navigate in it, and how TotalStorage Productivity Center for Fabric integrates with NetView. Detailed information on NetView is in the redbook Tivoli NetView V6.01 and Friends, SG24-6019.

14.1.1 NetView interface

NetView uses a graphical interface to display a map of the IP network with all the components and interconnect elements that are discovered in the IP network. As your Storage Area Network (SAN) is also a network, TotalStorage Productivity Center for Fabric uses NetView and its graphical interface to display a map of the discovered storage network.

14.1.2 Maps and submaps

NetView uses maps and submaps to navigate your network and to display deeper details as you drill down. The main map is called the root map, while each dependent map is called a submap. Your SAN topology is displayed in the Storage Area Network submap and its dependents. You can navigate from one map to its submap simply by double-clicking the element you want to display.

14.1.3 NetView window structure

Figure 14-1 shows a basic NetView window, with the submap window, submap stack, and child submap area labeled.

Figure 14-1 NetView window

704 IBM TotalStorage Productivity Center: Getting Started
The NetView window is divided into three parts:

- The submap window displays the elements included in the current view. Each element can be another submap or a device.
- The submap stack is located on the left side of the submap window. This area displays a stack of icons representing the parent submaps that you have already displayed. It shows the hierarchy of submaps you have opened for a particular map. This navigation bar can be used to go back to a higher level with one click.
- The child submap area is located at the bottom of the submap window. It shows the submaps that you have previously opened from the current submap. You can open a submap from this area, or bring it into view if it is already open in another window.

14.1.4 NetView Explorer

From the NetView map-based window, you can switch to an Explorer view where all maps, submaps, and objects are displayed in a tree scheme (similar to the Microsoft Windows Explorer interface). To switch to this view, right-click a submap icon and select Explore as shown in Figure 14-2.

Figure 14-2 NetView Explorer option

Chapter 14. Using TotalStorage Productivity Center for Fabric 705
  • 726. Figure 14-3 shows the new display using the NetView Explorer. Figure 14-3 NetView explorer window From here, you can change the information displayed on the right pane by changing to the Tivoli Storage Area Network Manager view on the top pull-down field. The previously displayed view was System Configuration view. The new display is shown in Figure 14-4. Figure 14-4 NetView explorer window with Tivoli Storage Area Network Manager view 706 IBM TotalStorage Productivity Center: Getting Started
Now the right pane shows Label, Name, Type, and Status for the device. You may scroll right to see additional fields.

14.1.5 NetView Navigation Tree

From any NetView window, you can switch to the Navigation Tree by clicking the tree icon circled in Figure 14-5.

Figure 14-5 NetView toolbar

NetView displays, in a tree format, all the objects contained in the maps you have already explored. Figure 14-6 shows the tree view.

Figure 14-6 NetView tree map

You can see that our SAN (circled in red) does not show its dependent objects, since we have not yet opened this map through the standard NetView navigation window. You can click any object and it will open its submap in the standard NetView view.

14.1.6 Object selection and NetView properties

To select an object, right-click it. NetView displays a context-sensitive menu with several options, including Object Properties, as shown in Figure 14-7.

Chapter 14. Using TotalStorage Productivity Center for Fabric 707
  • 728. Figure 14-7 NetView objects properties menu The Object Properties for that device will display (Figure 14-8). This will allow you to change NetView properties such as the label and icon type of the selected object. Figure 14-8 NetView objects properties Important: As TotalStorage Productivity Center for Fabric runs its own polling and discovery processes and only uses NetView to display the discovered objects, each change to the NetView object properties will be lost as soon as TotalStorage Productivity Center for Fabric regenerates a new map. 708 IBM TotalStorage Productivity Center: Getting Started
14.1.7 Object symbols

TotalStorage Productivity Center for Fabric uses its own set of icons, as shown in Figure 14-9. Two new icons have been added for Version 1.2: ESS and SAN Volume Controller.

Figure 14-9 Productivity Center for Fabric icons

14.1.8 Object status

The color of a symbol or connection represents its status. The colors used by Productivity Center for Fabric and their corresponding status are shown in Table 14-1.

Table 14-1 Productivity Center for Fabric symbol color meanings

- Green symbol, black connection - Normal: The device was detected in at least one of the scans.
- Green symbol, black connection - New: The device was detected in at least one of the scans, and a new discovery has not yet been performed since the device was detected.
- Yellow symbol, yellow connection - Marginal (suspect): Device detected; the status is impaired but still functional.
- Red symbol, red connection - Missing: None of the scans that previously detected the device are now reporting it.

IBM Tivoli NetView uses additional colors to show the specific status of devices; however, these are not used in the same way by Productivity Center for Fabric (Table 14-2).

Table 14-2 IBM Tivoli NetView additional colors

- Blue - Unknown: Status not determined.
- Wheat (tan) - Unmanaged: The device is no longer monitored for topology and status changes.
- Dark green - Acknowledged: The device was Missing, Suspect, or Unknown; the problem has been recognized and is being resolved.
- Gray (used in the NetView Explorer left pane) - Unknown: Status not determined.

If you suspect problems in your SAN, look in the topology displays for icons indicating a status other than normal (green). To assist in problem determination, Table 14-3 provides an overview of symbol status with possible explanations of the problem.

Chapter 14. Using TotalStorage Productivity Center for Fabric 709
Table 14-3 Problem determination

- Agents: Any; Device: Normal (green); Link: Marginal (yellow)
  Non-ISL explanation: One or more, but not all, links to the device in this topology are missing.
  ISL explanation: One or more, but not all, links between the two switches are missing.

- Agents: Any; Device: Normal (green); Link: Critical (red)
  Non-ISL explanation: All links to the device in this topology are missing, while other links to this device in other topologies are normal.
  ISL explanation: All links between the two switches are missing, but the out-of-band communication to the switch is normal.

- Agents: Any; Device: Critical (red); Link: Critical (red)
  Non-ISL explanation: All links to the device in this topology are missing, while all other links to devices in other topologies are missing (if any).
  ISL explanation: All links between the two switches are missing, and the out-of-band communication to the switch is missing or indicates that the switch is in critical condition.

- Agents: Both; Device: Critical (red); Link: Normal (black)
  Non-ISL explanation: All in-band agents monitoring the device can no longer detect the device. For example: a server reboot, power-off, shutdown of the agent service, Ethernet problems, and so on.
  ISL explanation: This condition should not happen. If you see this on an ISL where the switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.

- Agents: Both; Device: Critical (red); Link: Marginal (yellow)
  Non-ISL explanation: At least one link to the device in this topology is normal and one or more links are missing. In addition, all in-band agents monitoring the device can no longer detect the device.
  ISL explanation: This condition should not happen. If you see this on an ISL where the switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.

710 IBM TotalStorage Productivity Center: Getting Started
14.1.9 Status propagation

Each object has a color representing its status. If the object is an individual device, the status shown is that of the device. If the object is a submap, the status shown reflects the summary status of all objects in its child submap. The status of lower-level objects is propagated to the higher submap as shown in Table 14-4.

Table 14-4 Status propagation rules

- Unknown: No symbols with a status of normal, critical, suspect, or unmanaged.
- Normal: All symbols are normal or acknowledged.
- Suspect (marginal): All symbols are suspect; or normal and suspect symbols; or normal, suspect, and critical symbols.
- Critical: At least one symbol is critical and no symbols are normal.

14.1.10 NetView and Productivity Center for Fabric integration

Productivity Center for Fabric adds a SAN menu entry to the IBM Tivoli NetView interface, shown in Figure 14-10. The SAN pull-down menu contains the following entries:

- SAN Properties: display and change object properties, such as the object label and icon
- Launch Application: run a management application
- ED/FI Properties: view ED/FI events
- ED/FI Configuration: start, stop, and configure ED/FI
- Configure Agents: add and remove agents
- Configure Manager: configure the polling and discovery scheduling
- Set Event Destination: configure SNMP and TEC event recipients
- Storage Resource Manager: launch TotalStorage Productivity Center for Data
- Help

Figure 14-10 SAN Properties menu

Chapter 14. Using TotalStorage Productivity Center for Fabric 711
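The status propagation rules in Table 14-4 (in 14.1.9 above) amount to a small decision procedure. The sketch below is our own illustration, not product code; how statuses outside the table combine (for example, a lone unmanaged symbol) is an assumption here and is treated as suspect.

```python
def propagate(child_statuses):
    """Summary status of a submap from its child symbols, per Table 14-4."""
    s = set(child_statuses)
    if not s & {"normal", "critical", "suspect", "unmanaged"}:
        return "unknown"   # nothing determinable in the child submap
    if s <= {"normal", "acknowledged"}:
        return "normal"    # everything healthy or acknowledged
    if "critical" in s and "normal" not in s:
        return "critical"  # at least one critical symbol and none normal
    return "suspect"       # any remaining mix is marginal

print(propagate(["normal", "suspect", "critical"]))  # suspect
print(propagate(["critical", "suspect"]))            # critical
```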
All of these items are subsequently described in more detail.

14.2 Walk-through of Productivity Center for Fabric

This section takes you through Productivity Center for Fabric, stepping through its different views to help you understand how to use the various panels. Anyone familiar with NetView will find this similar, because Productivity Center for Fabric uses NetView to display your SAN along with your IP network. In the first view, you see three icons: IP Internet, SmartSets, and SAN. We focus on the SAN icon. Figure 14-11 shows the root display window when you first launch NetView. The green background on the SAN icon indicates that all is well in that environment.

Figure 14-11 NetView root display

712 IBM TotalStorage Productivity Center: Getting Started
  • 733. There are three different types of views in Productivity Center for Fabric: Device Centric view, Host Centric view, and SAN view. In our configuration, the NetView display (Figure 14-12) shows two separate SANs that we are monitoring: TPC SAN and TSM SAN. Figure 14-12 SAN view 14.2.1 Device Centric view The first view is the Device Centric view. From this view, you can drill down to see the device point of view. In this example, we have a view of two IBM FAStT devices. The one we are using is labeled FAStT-1T14859668. We drill down on that device to see which systems are using LUNs from the FAStT. Here we see that two LUNs are available. As we drill down on LUN1, we see that a host named PDQDISRV has been assigned that LUN. If we go further, we can see that this system is a Windows 2000 system. Chapter 14. Using TotalStorage Productivity Center for Fabric 713
  • 734. 14.2.2 Host Centric view Now we investigate the Host Centric view. Note: Only the systems that have the Productivity Center for Fabric agent installed on them are displayed in this view. The Host Centric view displays all host systems and their logical relationships to local and SAN-attached devices. Here again we see a system called PQDISRV. If we drill down on this system, we can see that this is a Windows 2000 system that has four file systems defined on it. We can also look at the properties of those file systems. This enables us to see such information as the type of file system, mount point, total amount of space and how much free space is available. As we drill down further, we can see the logical volume or volumes behind those file systems. 14.2.3 SAN view The SAN view displays one symbol for each SAN. You can see from Figure 14-12 on page 713 that there are two SANs. When we double-click the SAN icon labelled TPC SAN, we see the underlying submap (Figure 14-13). From the submap, you can choose either the Topology View or Zone View. Figure 14-13 SAN subview 714 IBM TotalStorage Productivity Center: Getting Started
  • 735. First we explore the Zone View. The Zone View displays information by zone groupings. Figure 14-14 displays the information about the three zones that have been setup on the Fibre Channel switch: the Colorado, Gallium and PQDI zones. Figure 14-14 Zone View Chapter 14. Using TotalStorage Productivity Center for Fabric 715
We can drill down in each zone and see which systems and devices have been assigned to that specific zone. Figure 14-15 shows the Colorado zone, in which there is one host and a FAStT disk subsystem.

Figure 14-15 Colorado zone contents

716 IBM TotalStorage Productivity Center: Getting Started
  • 737. Now we look at the Topology View (Figure 14-16). The Topology View draws a picture of how the SAN is configured, which devices are connected to which ports, and so on. As we drill down in the Topology View, we first see the interconnect elements. This shows you the connection between any switches. In our small environment, we have only one switch, so the only device connected is the itsosw4 switch, which is an IBM 2109-F16 switch. Figure 14-16 Topology View of switches Chapter 14. Using TotalStorage Productivity Center for Fabric 717
  • 738. If we had two switches in our SAN, we would see a switch icon on either side of the Interconnect elements icon. As we drill down on the switch, we see what devices and systems are directly attached to it. Figure 14-17 shows five hosts, the FAStT device, and the IBM switch in the middle. Figure 14-17 SAN topology From here, we show you several features of Productivity Center for Fabric such as: How to configure the manager and what happens when things go wrong Properties of a host with the Productivity Center for Fabric agent installed How to configure SNMP agents 718 IBM TotalStorage Productivity Center: Getting Started
  • 739. We begin by showing what happens when things go wrong. Figure 14-18 shows that the FAStT disk system has a redundant connection. Let’s see what happens when one connection goes down. Figure 14-18 FAStT dual connections Chapter 14. Using TotalStorage Productivity Center for Fabric 719
  • 740. In Figure 14-19 you notice on the left, that all of the parent icons have turned yellow. This indicates that something has happened in your SAN environment. You can then drill down, following the yellow trail until you find the problem. Here we can see that one of the connections to the FAStT disk system has gone down. Figure 14-19 Failed resource This gives an administrator a place to start looking. After they determine what the problem is, they can take corrective action. The FAStT icon has turned Red, not because it has failed, but so you can see that it is affected. In our case, we lost access to one of the controllers of the FAStT, because it was the only path to that controller. If you right-click the FAStT icon and then select Acknowledge, it changes back to Green if the device itself is OK. The path to the icon still remains Yellow. When the problem is corrected, the topology is updated to reflect the resolution. 720 IBM TotalStorage Productivity Center: Getting Started
  • 741. Now let us see the kind of information that we can view from a host that has a Productivity Center for Fabric agent installed on it. You select the required host, and then click SAN →SAN Properties. A Properties window (Figure 14-20) opens. It shows such information as IP address, operating system, host bus adapter type driver versions, and firmware levels. Figure 14-20 Properties of host GALLIUM Chapter 14. Using TotalStorage Productivity Center for Fabric 721
When you click the Connection tab on the left, you see the port on the switch to which the specific host is connected, as shown in Figure 14-21.

Figure 14-21 GALLIUM connection

Now that you have seen the agents and where you can define them, let's look at the manager configuration. The manager configuration is simple and enables you to set the polling intervals. Figure 9-46 on page 349 shows the polling setup, in which you specify how often you want your agents to poll the SAN. You can set this to minutes, hours, days, or weeks, choose which days you want to poll on, and set the exact time. Or you can poll manually by clicking the Poll Now button. The Clear History button changes the state of an object that previously had a problem but is back up: the state appears as yellow, and the Clear History button changes it back to normal (green).
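The fixed-interval scheduling that the polling panel describes can be sketched as follows. This is illustrative only; the function name and unit handling are ours, not the product's internal logic:

```python
from datetime import datetime, timedelta

def next_poll(last_poll: datetime, every: int, unit: str) -> datetime:
    """Return the next scheduled poll time for a simple fixed interval.

    `unit` is one of "minutes", "hours", "days", or "weeks", mirroring
    the choices offered in the Configure Manager polling panel.
    """
    return last_poll + timedelta(**{unit: every})

# Example: agents polled hourly
nxt = next_poll(datetime(2005, 12, 1, 9, 0), 1, "hours")
```

A Poll Now action would simply bypass this schedule and trigger a discovery immediately.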
  • 743. 14.2.4 Launching element managers Productivity Center for Fabric also has the ability to launch element managers. By element manager, we are referring to applications that vendors use to configure their hardware. Figure 14-22 shows the Productivity Center for Fabric launching the element manager for the IBM 2109 Fibre Channel switch. Figure 14-22 Launching an element manager Chapter 14. Using TotalStorage Productivity Center for Fabric 723
  • 744. Figure 14-23 shows the management tool for the IBM 2109 after being launched from Productivity Center for Fabric. Figure 14-23 Switch management 724 IBM TotalStorage Productivity Center: Getting Started
14.2.5 Explore view

Along with the Productivity Center for Fabric Topology View, you can view your SAN environment with a Windows Explorer-style view. By clicking the Submap Explorer button in the center of the toolbar, you see a view like the example in Figure 14-24. The Navigation Tree button shows a flowchart-type view of the Productivity Center for Fabric views.

Figure 14-24 Explorer view

14.3 Topology views

The standard IP-based IBM Tivoli NetView root map contains IP Internet and SmartSets submaps. Productivity Center for Fabric adds a third submap, called Storage Area Network, to allow navigation through your discovered SAN. Figure 14-25 shows the NetView root map with the addition of Productivity Center for Fabric.
  • 746. Figure 14-25 IBM Tivoli NetView root map The Storage Area Network submap (shown in Figure 14-26) displays an icon for each available topology view. There will be a SAN view icon for each discovered SAN fabric (three in our case), a Device Centric View icon, and a Host Centric View icon. Figure 14-26 Storage Area Network submap 726 IBM TotalStorage Productivity Center: Getting Started
You can see in this figure that we had three fabrics. They are named Fabric1, Fabric3, and Fabric4, since we changed their labels using SAN → SAN Properties as explained in “Properties” on page 736. Figure 14-27 shows the complete list of views available. In the following sections we describe the content of each view.

Figure 14-27 Topology views (a tree diagram of the view hierarchy: the Tivoli NetView root map contains the Storage Area Network submap, which leads to the SAN view, Device Centric view, and Host Centric view; the SAN view leads to the Topology view and Zone view, and the centric views lead down to hosts, switches, interconnect elements, zones, LUNs, filesystems, and volumes)

14.3.1 SAN view

The SAN view allows you to see the SAN topology at the fabric level. In this case we clicked the Fabric1 icon shown in Figure 14-26 on page 726. The display in Figure 14-28 appears, giving access to two further submaps: the Topology view and the Zone view.

Figure 14-28 Storage Area Network view
  • 748. Topology view The topology view is used to display all elements of the fabric including switches, hosts, devices, and interconnects. As shown on Figure 14-29, this particular fabric has two switches. Figure 14-29 Topology view Now, you can click a switch icon to display all the hosts and devices connected to the selected switch (Figure 14-30). Figure 14-30 Switch submap 728 IBM TotalStorage Productivity Center: Getting Started
  • 749. On the Topology View (shown in Figure 14-29 on page 728) you can also click Interconnect Elements to display information about all the switches in that SAN (Figure 14-31). Figure 14-31 Interconnect submap The switch submap (Figure 14-30), shows that six devices are connected to switch ITSOSW1. Each connection line represents a logical connection. Click a connection bar twice to display the exact number of physical connections (Figure 14-32). We now see that, for this example, SOL-E is connected to two ports on the switch ITSOSW1. Figure 14-32 Physical connections view Chapter 14. Using TotalStorage Productivity Center for Fabric 729
  • 750. When the connection represents only one physical connection (or, if we click one of the two connections shown in Figure 14-32), NetView displays its properties panel (Figure 14-33). Figure 14-33 NetView properties panel Zone view The Zone view submap displays all zones defined in the SAN fabric. Our configuration contains two zones called FASTT and TSM (Figure 14-34). Figure 14-34 Zone view submap 730 IBM TotalStorage Productivity Center: Getting Started
Click twice on the FASTT icon to see all the elements included in the FASTT zone (Figure 14-35).

Figure 14-35 FASTT zone

In lab 1, the FASTT zone contains five hosts and one storage server. We have installed TotalStorage Productivity Center for Fabric Agents on the four hosts that are labelled with their correct hostname (BRAZIL, GALLIUM, SICILY and SOL-E). For the fifth host, LEAD, we have not installed the agent. However, it is discovered since it is connected to the switch. Productivity Center for Fabric displays it as a host device, and not as an unknown device, because the QLogic HBA drivers installed on LEAD support RNID. This RNID support gives the switch the ability to get additional information, including the device type (shown by the icon displayed) and the WWN. The disk subsystem is shown with a question mark because the FAStT700 was not yet fully supported (with the level of code available at the time of writing) and Productivity Center for Fabric was not able to determine all the properties from the information returned by the inband and outband agents.

14.3.2 Device Centric View

You may have several SAN fabrics with multiple storage servers. The Device Centric View (accessed from the Storage Area Network view, as shown in Figure 14-26 on page 726) displays the storage devices connected to your SANs and their relationship to the hosts. This is a logical view, as the connection elements are not shown. Because of this, you may prefer to see this information using the NetView Explorer interface as shown in Figure 14-36. This has the advantage of automatically displaying all the lower-level items for the Device Centric View listed in Figure 14-27 on page 727 simultaneously, such as LUNs and hosts.
  • 752. Figure 14-36 Device Centric View In the preceding figure, we can see the twelve defined LUNs and the host to which they have been allocated. The dependency tree is not retrieved from the FAStT server but is consolidated from the information retrieved from the managed hosts. Therefore, the filesystems are not displayed as they can be spread on several LUNs and this information is transparent to the host. Note that the information is also available for the MSS storage server, the other disk storage device in our SAN. 14.3.3 Host Centric View The Host Centric View (accessed from the Storage Area Network view, as shown in Figure 14-26 on page 726) displays all the hosts in the SAN and their related local and SAN-attached storage devices. This is a logical view that does not show the interconnect elements (and runs across the fabrics). Since this is also a logical view, like the Device Centric View, the NetView Explorer presents a more comprehensive display (Figure 14-37). 732 IBM TotalStorage Productivity Center: Getting Started
Figure 14-37 Host Centric View for Lab 1

We see our four hosts and all their filesystems, whether they are local or SAN-attached. NFS-mounted filesystems and shared directories are not displayed. Since no agent is running on LEAD, it is not shown in this view.

14.3.4 iSCSI discovery

For this environment we reference SAN Lab 2 (“Lab 2 environment” on page 752).

Starting discovery

You can discover and manage devices that use the iSCSI storage networking protocol through Productivity Center for Fabric using IBM Tivoli NetView. Before discovery, SNMP and the iSCSI MIBs must be enabled on the iSCSI device, and Tivoli NetView IP Discovery must be enabled. See 14.11, “Real-time reporting” on page 786 for enabling IP discovery. The IBM Tivoli NetView nvsniffer daemon discovers the iSCSI devices. Depending on the iSCSI operation chosen, a corresponding iSCSI SmartSet is created under the IBM Tivoli NetView SmartSets icon. By default, the nvsniffer utility runs every 60 minutes. Once nvsniffer discovers an iSCSI device, it creates an iSCSI SmartSet located on the NetView Topology map at the root level. The user can select what type of iSCSI device is discovered. From the menu bar, click Tools → iSCSI Operations and select Discover All iSCSI Devices, Discover All iSCSI Initiators, or Discover All iSCSI Targets, as shown in Figure 14-38. For more details about iSCSI, refer to 14.12, “Productivity Center for Fabric and iSCSI” on page 810.
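The grouping that this discovery produces can be illustrated with a small sketch. The data shapes and names below are assumptions for illustration, not nvsniffer's actual interfaces:

```python
def build_smartsets(devices):
    """Group discovered iSCSI devices into SmartSet-like collections.

    `devices` is a list of (name, role) tuples with role "initiator" or
    "target" -- a stand-in for what discovery reports, not its real output.
    Devices with any other role are simply left out of both sets.
    """
    sets = {"iSCSI Initiators": [], "iSCSI Targets": []}
    for name, role in devices:
        if role == "initiator":
            sets["iSCSI Initiators"].append(name)
        elif role == "target":
            sets["iSCSI Targets"].append(name)
    return sets
```

Choosing Discover All iSCSI Initiators or Discover All iSCSI Targets corresponds to keeping only one of the two groups.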
  • 754. Figure 14-38 iSCSI discovery Double-click the iSCSI SmartSet icon to display all iSCSI devices. Once all iSCSI devices are discovered by NetView, the iSCSI SmartSet can be managed from a high level. Status for iSCSI devices is propagated to the higher level, as described in 14.1.9, “Status propagation” on page 711. If you detect a problem, drill to the SmartSet icon and continue drilling through the iSCSI icon to determine what iSCSI device is having the problem. Figure 14-39 shows an iSCSI SmartSet. Figure 14-39 iSCSI SmartSet 14.3.5 MDS 9000 discovery The Cisco MDS 9000 is a family of intelligent multilayer directors and fabric switches that have such features as: virtual SANs (VSANs), advanced security, sophisticated debug analysis tools and an element manager for SAN management. Productivity Center for Fabric has enhanced compatibility for the Cisco MDS 9000 Series switch. Tivoli NetView displays the port numbers in a format of SSPP, where SS is the slot number and PP is the port number. The Launch Application menu item is available for the Cisco switch. When the Launch Application is selected, the Cisco Fabric Manager application is started. For more details, see 14.7.1, “Cisco MDS 9000 discovery” on page 745. 734 IBM TotalStorage Productivity Center: Getting Started
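The SSPP port numbering described above can be illustrated with two small helpers; the function names are hypothetical:

```python
def format_sspp(slot: int, port: int) -> str:
    """Render a Cisco MDS port the way NetView displays it: SSPP,
    two digits of slot number followed by two digits of port number."""
    return f"{slot:02d}{port:02d}"

def parse_sspp(sspp: str) -> tuple:
    """Split an SSPP display string back into (slot, port)."""
    return int(sspp[:2]), int(sspp[2:])
```

For example, port 15 in slot 2 appears as "0215".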
  • 755. 14.4 SAN menu options In this section we describe some of the menu options contained under the SAN pull-down menu option for Productivity Center for Fabric. 14.4.1 SAN Properties As shown in Figure 14-40, select an object and use SAN → SAN Properties to display the properties gathered by Productivity Center for Fabric. In this case we are selecting a particular filesystem (the root filesystem) from the Agent SOL-E. Figure 14-40 SAN Properties menu This will display a SAN Properties window that is divided into two panes. The left pane always contains Properties, and may also contain Connection and Sensors/Events, depending on the type of object being displayed. The right pane contains the details of the object. These are some of the device types that give information in the SAN Properties menu: Disk drive Hdisk Host file system LUN Log volume OS Physical volume Port SAN Chapter 14. Using TotalStorage Productivity Center for Fabric 735
  • 756. Switch System Tape drive Volume group Zone Properties The first grouping item is named Properties and contains generic information about the selected device. The information that is displayed depends on the object type. This section shows at least the following information: Label: The label of the object as it is displayed by Productivity Center for Fabric. If you update this field, this change will be kept over all discoveries. Icon: The symbol representing the device type. If the object is of an unknown type, this field will be in read-write mode and you will be able to select the correct symbol. Name: The reported name of the device. Figure 14-41 shows the Properties section for a filesystem. You can see that it displays the filesystem name and type, the mount point, and both the total and available space. Since a filesystem is not related to a port connection and also does not return sensor events, only the Properties section is available. Figure 14-41 Productivity Center for Fabric Properties — Filesystem Figure 14-42 shows the Properties section for a host. You can see that it displays the hostname, the IP address, the hardware type, and information about the HBA. Since the host does not give back sensor related events, only the Properties and Connections sections are available. 736 IBM TotalStorage Productivity Center: Getting Started
  • 757. Figure 14-42 Productivity Center for Fabric Properties — Host Figure 14-43 shows the Properties section for a switch. You can see that it displays fields including the name, the IP address, and the WWN. The switch is a connection device and sends back information about the events and the sensors. Therefore, all three item groups are available (Properties, Connections, and Sensors/Events). Figure 14-43 Productivity Center for Fabric Properties — Switch Chapter 14. Using TotalStorage Productivity Center for Fabric 737
  • 758. Figure 14-44 shows the properties for an unknown device. Here you can change the icon to a predefined one by using the pull-down field Icon. You can also change the label of a device even if the device is of a known type. Figure 14-44 Changing icon and name of a device Connection The second grouping item, Connections shows all ports in use for the device. This section appears only when it is appropriate to the device displayed — switch or host. On Figure 14-45, we see the Connection tab for one switch where six ports are used. Port 0 is used for the Inter-Switch Link (ISL) to switch ITSOSW2. This is a very useful display, as it shows which device is connected on each switch port. Figure 14-45 Connection information 738 IBM TotalStorage Productivity Center: Getting Started
  • 759. Sensors/Events The third grouping item, Sensors/Events, is shown in Figure 14-46. It shows the sensors status and the device events for a switch. It may include information about fans, batteries, power supplies, transmitter, enclosure, board, and others. Figure 14-46 Sensors/Events information 14.5 Application launch Many SAN devices have vendor-provided management applications. Productivity Center for Fabric provides a launch facility for many of these. Chapter 14. Using TotalStorage Productivity Center for Fabric 739
  • 760. 14.5.1 Native support For some supported devices, Productivity Center for Fabric will automatically discover and launch the device-related administration tool. To launch, select the device and then click SAN → Launch Application. This will launch the Web application associated with the device. In our case, it launches the Brocade switch management Web interface for the switch ITSOSW1, shown in Figure 14-47. Figure 14-47 Brocade switch management application 14.5.2 NetView support for Web interfaces For devices that have not identified their management application, IBM Tivoli NetView allows you to manually configure the launch of a Web interface for any application, by doing the following actions: Right-click the device and select Object Properties from the context-sensitive menu. On the dialog box, select the Other tab (shown in Figure 14-48). Select LANMAN from the pull-down menu. Check isHTTPManaged. Enter the URL of the management application in the Management URL field. Click Verify, Apply, OK. 740 IBM TotalStorage Productivity Center: Getting Started
  • 761. Figure 14-48 NetView objects properties — Other tab After this, you can launch the Web application by right-clicking the object and then selecting Management Page, as shown in Figure 14-49. Figure 14-49 Launch of the management page Important: This definition will be lost if your device is removed from the SAN and subsequently rediscovered, since it will be a new object for NetView. Chapter 14. Using TotalStorage Productivity Center for Fabric 741
14.5.3 Launching TotalStorage Productivity Center for Data

The TotalStorage Productivity Center for Data interface can be started by using the TotalStorage Productivity Center for Fabric NetView console. To do this, select SAN → Storage Resource Manager, as shown in Figure 14-50.

Figure 14-50 Launch Tivoli Storage Resource Manager

The user properties file contains an SRMURL setting that defaults to the fully qualified host name of Tivoli Storage Area Network Manager. This default assumes that both TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Fabric are installed on the same machine. If TotalStorage Productivity Center for Data is installed on a separate machine, you can modify the SRMURL value to specify the host name of the TotalStorage Productivity Center for Data machine. For instructions on how to do this, refer to the manual IBM Tivoli Storage Area Network Manager User’s Guide, SC23-4698. If the following conditions are true, you can start the TotalStorage Productivity Center for Data graphical interface from the Tivoli NetView console: TotalStorage Productivity Center for Data or its graphical interface is installed on the same machine as TotalStorage Productivity Center for Fabric, or the SRMURL value specifies the hostname of TotalStorage Productivity Center for Data; and TotalStorage Productivity Center for Fabric is currently running. For more information on TotalStorage Productivity Center for Data, see the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

14.5.4 Other menu options

For the other options on the SAN pull-down menu: Configure Agents is covered in “Configuring the outband agents” on page 346 and “Checking inband agents” on page 348. Configure Manager is covered in “Performing an initial poll and setting up the poll interval” on page 349.
Set Event Destination is covered in “Configuring SNMP” on page 342. ED/FI Properties and ED/FI Configuration are covered in “Configuration for ED/FI - SAN Error Predictor” on page 818.
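The SRMURL change described in 14.5.3 amounts to editing one key in a key=value properties file. A minimal sketch, assuming a simple line-oriented file format (the actual file location and surrounding keys are installation-specific):

```python
def set_srm_url(lines, new_url):
    """Rewrite the SRMURL entry in key=value properties-file content.

    `lines` is the file content as a list of lines; returns the updated
    lines. If no SRMURL entry exists, one is appended. The key name
    SRMURL comes from the user properties file described in 14.5.3.
    """
    out, found = [], False
    for line in lines:
        if line.strip().startswith("SRMURL="):
            out.append(f"SRMURL={new_url}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"SRMURL={new_url}")
    return out
```

After saving the modified file, the SAN → Storage Resource Manager launch points at the new host.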
14.6 Status cycles

Figure 14-51 shows the typical color change status cycles which reflect normal operation as a device goes down and comes up. Table 14-1 on page 709 and Table 14-2 on page 709 list the meanings of the different colors.

Figure 14-51 IBM Tivoli SAN Manager — normal status cycle (states NEW/green, NORMAL/green, and MISSING/red, with Device down, Device up, and Clear History transitions)

If you do not manually use NetView capabilities to change status, the status of a Tivoli SAN Manager object goes from green to red and from red to green. Note that the only difference between an object in the NORMAL/GREEN and NEW/GREEN status is in the Status field under SAN Properties (see Figure 14-42 on page 737 for an example). A new object will have New in the field and a normal object will show Normal. The icon displayed in the topology map will look identical in both cases. You can encounter situations where your device is down for a known reason, such as an upgrade or hardware replacement, and you don’t want it displayed with a missing/red status. You can use the NetView Unmanage function to set its color to tan to avoid having the yellow or red status reported and propagated in the topology display. See Figure 14-52.
Figure 14-52 Status cycle using Unmanage function (adds NORMAL/tan and MISSING/not-displayed states, reached through the Manage/Unmanage and Clear History transitions)

However, when a device is unmanaged and you do a SAN → Configure Manager → Clear History to remove historical data, the missing device will be removed from the Productivity Center for Fabric database and will no longer be reported until it is back up with a new/green status. If you have changed the label of the device, and it is rediscovered after a Clear History, it will reappear with the default generated name, as this information is not saved. See Figure 14-53.

Figure 14-53 Status cycle using Acknowledge function (adds a MISSING/dark green state, reached through the Acknowledge/Unacknowledge transitions)

You can use the NetView Acknowledge function to specify that you have been notified about the problem and that you are currently searching for more information or for a solution. This sets the device’s color to dark green to avoid having the yellow or red status reported and propagated in the topology display. Subsequently, you can use the Unacknowledge function to return to the normal status and color cycle. When the device becomes available, it will automatically return to the normal reporting cycle.
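The status cycles above can be modeled as a small state machine. This is a simplified sketch: the state and event names are ours, and it covers only the transitions discussed here (device up/down, Acknowledge/Unacknowledge, Manage/Unmanage), not NetView's full behavior:

```python
# States and events are our own labels for the cycles in Figures 14-51
# through 14-53; this is a simplified model, not NetView's implementation.
TRANSITIONS = {
    ("NORMAL", "device_down"): "MISSING",        # green -> red
    ("MISSING", "device_up"): "NORMAL",          # red -> green
    ("MISSING", "acknowledge"): "ACKNOWLEDGED",  # red -> dark green
    ("ACKNOWLEDGED", "unacknowledge"): "MISSING",
    ("ACKNOWLEDGED", "device_up"): "NORMAL",     # rejoins the normal cycle
    ("NORMAL", "unmanage"): "UNMANAGED",         # green -> tan
    ("UNMANAGED", "manage"): "NORMAL",
}

def step(state: str, event: str) -> str:
    """Apply one event; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Walking a device through down, acknowledge, and up reproduces the red, dark green, and green sequence described in the text.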
14.7 Practical cases

We have re-created some typical errors that can happen in a production environment to see how and why Productivity Center for Fabric reacts to them. We have also used different configurations of the inband and outband agents and correlated the results with the explanations.

14.7.1 Cisco MDS 9000 discovery

In this section we discuss the discovery of the Cisco MDS 9509, which is part of the MDS 9000 family. Our MDS 9509 is a multilayer switch/director with a 6-slot configuration. We have one 16-port card and one 32-port card running at 2 Gb/s. Discovery of the MDS 9509 is performed using inband management. Figure 14-54 shows the lab environment used to demonstrate the following discovery. We will call this Lab environment 3.

Figure 14-54 Lab environment 3 (the SAN Manager LOCHNESS and the agent host SANAN connect over the intranet to the Cisco 9509, to which the hosts SANXC1, SANXC2, and SANAN3 are attached)

We first deployed a Productivity Center for Fabric Agent to SANAN. Once the agent was installed, it registered with the Productivity Center for Fabric manager, LOCHNESS, and discovered CISCO1 (the MDS 9509). The topology in Figure 14-55 was displayed after deploying the agent.

Note: In order to discover the MDS 9000, at least one Productivity Center for Fabric Agent must be installed on a host attached to the MDS 9000. Outband management is not supported for the MDS 9000.
  • 766. Figure 14-55 Discovery of MDS 9509 To display the properties of CISCO1, right-click the CISCO1 icon to select it and select SAN → SAN Properties. See Figure 14-56. Figure 14-56 MDS 9509 properties 746 IBM TotalStorage Productivity Center: Getting Started
  • 767. The Connection option (Figure 14-57) displays information about the slots and ports where the hosts SANXC1, SANXC2 and SANXC3 are connected, as well as the status of each port. Figure 14-57 MDS 9509 connections 14.7.2 Removing a connection on a device running an inband agent Next, we removed the FC link between the host SICILY and the switch ITSOSW1. Productivity Center for Fabric does not show that the device is missing but shows that the connection is missing. As the host was running an in-band management agent, the host continues to report its configuration to the manager using the IP network. However, the attached switch sends a trap to the manager to signal the loss of a link. You can use Monitor → Events → All to view the trap received by NetView. Double-click the trap coming from ITSOSW1 to see details about the trap as shown in Figure 14-58. Figure 14-58 Trap received by NetView We see that ITSOSW1 sent a trap to signal that FCPortIndex4 (port number 3) has a status of 2 (which means Offline). Chapter 14. Using TotalStorage Productivity Center for Fabric 747
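Interpreting such a trap amounts to mapping the reported status code to a port state. In the sketch below, only code 2 (offline) is confirmed by the trap shown above; the other codes follow the common Brocade SW-MIB convention and are assumptions:

```python
# Port operational status codes. Code 2 (offline) matches the trap in
# the text; 1, 3, and 4 are assumed from the usual Brocade SW-MIB values.
PORT_STATUS = {1: "online", 2: "offline", 3: "testing", 4: "faulty"}

def decode_trap(port_index: int, status: int) -> str:
    # Port indexes in the trap are 1-based, so FCPortIndex4 is port 3.
    return f"port {port_index - 1} is {PORT_STATUS.get(status, 'unknown')}"
```

Applied to the trap above, decode_trap(4, 2) yields the "port 3 is offline" conclusion the manager correlates with the inband data.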
  • 768. The correlation between the inband information and the trap received is then made correctly and only the connection is shown as missing. You can see in Figure 14-59 that the connection line has turned red, using the colors referenced in Table 14-1 on page 709. Figure 14-59 Connection lost We then restored the connection, and following the status cycle explained in Figure 14-51 on page 743, the connections returned to normal (Figure 14-60). Figure 14-60 Connection restored 748 IBM TotalStorage Productivity Center: Getting Started
Next, we removed one of the two connections from the host TUNGSTEN to ITSOSW3. One link is lost, so the connection is now shown as suspect (yellow); see Figure 14-61.

Figure 14-61 Marginal connection

NetView follows its status propagation rules in Table 14-4 on page 711. This connection links to a submap with the two physical connections. The bottom physical connection is missing (red), and the other (top) one is normal (black), resulting in a propagated status of marginal (yellow) on the parent map (left hand side). See Figure 14-62.

Figure 14-62 Dual physical connections with different status
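The roll-up behavior shown in this example can be sketched as a small function. The three-state model is a simplification of NetView's actual propagation rules in Table 14-4:

```python
def propagate(child_states):
    """Roll up child connection states onto a parent map symbol.

    Simplified rule matching the behavior shown here: all children
    normal -> normal (black/green), all missing -> missing (red),
    any mix -> marginal (yellow).
    """
    states = set(child_states)
    if states == {"normal"}:
        return "normal"
    if states == {"missing"}:
        return "missing"
    return "marginal"
```

With one physical connection missing and one normal, the parent connection propagates as marginal, exactly as in Figure 14-62.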
14.7.3 Removing a connection on a device not running an agent

A device with no agent is only detected via its connection to the switch. If the connection is broken, the host cannot be discovered. In this case, we unplugged the FC link between the host LEAD and the switch ITSOSW2. LEAD is not running either an inband or an outband agent, as we can see using SAN → Agents configuration, shown in Figure 14-63.

Figure 14-63 Agent configuration

After removing the link on LEAD, we received a standard Windows missing device popup (Figure 14-64) indicating it could no longer see its FC-attached disk device.

Figure 14-64 Unsafe removal of device
Productivity Center for Fabric shows the device as Missing (the icon changes to red; see the color status listing in Table 14-1 on page 709), as it is no longer able to determine the status of the device (see Figure 14-65).

Figure 14-65 Connection lost on an unmanaged host

In Figure 14-66, the host is in Unmanaged (tan) status since we decided to unmanage it.

Figure 14-66 Unmanaged host
  • 772. We finally select SAN → Configure Manager → Clear History (See Figure 14-67). Figure 14-67 Clear History After the next discovery, as explained in Figure 14-52 on page 744, the host is no longer displayed (Figure 14-68), since it has been removed from the Productivity Center for Fabric database. Figure 14-68 NetView unmanaged host not discovered 14.7.4 Powering off a switch In this test we power off a SAN switch and observe the results. Lab 2 environment For demonstration purposes in the following sections, this lab is referenced as Lab 2. The configuration consists of: Two IBM 2109-S08 (ITSOSW1 and ITSOSW2) switches with firmware V2.6.0g 752 IBM TotalStorage Productivity Center: Getting Started
One IBM 2109-S16 (ITSOSW3) switch with firmware V2.6.0g
One IBM 2109-F16 (ITSOSW4) switch with firmware V3.0.2
One IBM 2108-G07 SAN Data Gateway
Two pSeries 620 (BANDA, KODIAK) running AIX 5.1 with:
– Two IBM 6228 cards
One IBM pSeries F50 (BRAZIL) running AIX 5.1 ML4 with:
– One IBM 6227 card with firmware 02903291
– One IBM 6228 card with firmware 02C03891
One HP server running HP-UX 11.0 with:
– One FC HBA
Four Intel® servers (TONGA, PALAU, WISLA, LOCHNESS)
Two Intel servers (DIOMEDE, SENEGAL) with:
– Two QLogic QLA2200 cards with firmware 8.1.5.12
One IBM xSeries 5500 (BONNIE) with:
– Two QLogic QLA2300 cards with firmware 8.1.5.12
One IBM Ultrium Scalable Tape Library (3583)
One IBM TotalStorage FAStT700 storage server

Figure 14-69 shows the SAN topology of our lab environment.

Figure 14-69 SAN lab environment 2 (the four switches connect the Windows, AIX, and HP-UX hosts, the SAN Manager LOCHNESS, the SAN Data Gateway with the LTO 3583 library, the FAStT700 storage server, the iSCSI host CLYDE, and a NAS 200)
We powered off the switch ITSOSW4, with managed host SENEGAL enabled. The topology map reflects this as shown in Figure 14-70: the switch and all its connections change to red.

Figure 14-70 Switch down, Lab 2

The agent running on the managed host (SENEGAL) has scanners listening to the HBAs located in the host. Those HBAs detect that the attached device, ITSOSW4, is not active since there is no signal from ITSOSW4. The information is retrieved by the scanners and reported back to the manager through the standard TCP/IP connection. Since the switch is not active, the hosts can no longer access the storage servers. The active agent (SENEGAL) sends the information to the manager, which triggers a new discovery. Since the switch no longer responds to outband management, Productivity Center for Fabric correlates all the information and, as a result, the connections between the managed hosts and the switch, and the switch itself, are shown as red/missing. The storage server is shown as green/normal because of a second Fibre Channel connection to ITSOSW2. ITSOSW2 is also green/normal because of the outband management being performed on this switch. The active agent host is still reported as normal/green as it sends its information to the manager through the TCP/IP network. Therefore the manager can determine that only the agent's switch connection, not the host itself, is down.
  • 775. Now, we powered the switch on again. At startup, the switch sends a trap to the manager. This trap will cause the manager to ask for a new discovery. The result is shown in Figure 14-71. Figure 14-71 Switch up Lab 2 Now, following the status propagation detailed in 14.6, “Status cycles” on page 743, all the devices are green/normal. Chapter 14. Using TotalStorage Productivity Center for Fabric 755
  • 776. 14.7.5 Running discovery on a RNID-compatible device When you define a host for inband management, the topology scanner will launch inband queries to all attached HBAs. The remote HBAs, if they support RNID, will send back information such as device type. On switch ITSOSW2 is a Windows host, CLYDE, with a QLogic card at the requested driver level. There is no agent installed on this host. We see however that it is discovered as a host rather than as an Unknown device, as shown in Figure 14-72, because of the HBA RNID support. Figure 14-72 RNID discovered host You can see under the SAN Properties window, Figure 14-73, that the RNID support only provides the device type (Host) and the WWN. Compare this with the SAN Properties window for a managed host, shown in Figure 14-42 on page 737. Figure 14-73 RNID discovered host properties 756 IBM TotalStorage Productivity Center: Getting Started
  • 777. To have a more explicit map, we put CLYDE in the Label field (using the method shown in Figure 14-44) and the host is now displayed with its new label. Figure 14-74 RNID host with changed label Chapter 14. Using TotalStorage Productivity Center for Fabric 757
  • 778. 14.7.6 Outband agents only To see what happens if there are only outband agents, that is, with no Productivity Center for Fabric agents running, we stopped all the running inband agents, cleared the Productivity Center for Fabric configuration, by using SAN → Configure Agent → Remove button, and then re-configured the outband agents on the switches ITSOSW1, ITSOSW2, and ITSOSW4 as shown in Figure 14-75. Figure 14-75 Only outband agents When configuring the agents, we also used the Advanced button to enter the administrator userid and password for the switches. This information is needed by the scanners to obtain administrative information such as zoning for Brocade switches. 758 IBM TotalStorage Productivity Center: Getting Started
  • 779. Productivity Center for Fabric discovers the topology by scanning the three registered switches. This is shown in Figure 14-76. The information about the attached devices is limited to the WWN of the device since this information is retrieved from the switch and there is no other inband management. Note the ‘-’ signs next to Device Centric and Host Centric Views — this information is retrieved only by the inband agent so is not available to us here. Figure 14-76 Explorer view with only outband agents Figure 14-77 shows the information retrieved from the switches (SAN Properties). Figure 14-77 Switch information retrieved using outband agents Chapter 14. Using TotalStorage Productivity Center for Fabric 759
  • 780. 14.7.7 Inband agents only For this practical case, we first unplugged all Fibre Channel connections from all agents and we removed all the outband agents from the configuration using SAN → Configure Agents → Remove tab. We then forced a new poll. As expected, the agents returned only information about the node and the local filesystems, shown in Figure 14-78. Note the ‘-’ sign in front of /data01 for host SICILY. The filesystem is defined but not mounted, as the Fibre Connections are not active. Figure 14-78 Inband agents only without SAN connections We reconnected the Fibre Channel connections to all agents into the switch and forced a new polling. We now see that all agents reported information about their filesystems. Since the agents are connected to a switch, the inband agents will retrieve information from it using inband management. That explains why we see all the devices including those without agents installed. Figure 14-79 shows that: Our four inband agents (BRAZIL, GALLIUM, SICILY, SOL-E) are recognized. The two switches ITSOSW1 and ITSOSw2 are found, since agents are connected to them. Device 1000006045161FF5 is displayed since it is connected to the switch ITSOSW1. The device type is Unknown, as there is no inband nor outband agent on this device. 760 IBM TotalStorage Productivity Center: Getting Started
  • 781. Figure 14-79 Inband agents only with SAN connections We can also display SAN Properties as shown in Figure 14-80. Figure 14-80 Switches sensor information We now have no zoning information available since this is retrieved from the switch outband Agent for the 2109 switch. This is indicated by the — sign next to Zone View on Figure 14-79. Chapter 14. Using TotalStorage Productivity Center for Fabric 761
  • 782. 14.7.8 Disk devices discovery The topology scanner will launch inband queries to all attached HBAs. The Attribute scanner will then do a SCSI request to get attribute information about the remote devices. Due to LUN masking, the storage server will deny all requests if there are no LUNs defined for the querying host. Figure 14-81 shows how our SAN topology is mapped when there is an IBM MSS storage server but with no LUNs defined or accessible for the hosts in the same fabric. The storage server is shown as an Unknown device because the inband agents were not allowed to do SCSI requests to the storage servers as they had no assigned LUNs. Figure 14-81 Discovered SAN with no LUNS defined on the storage server 762 IBM TotalStorage Productivity Center: Getting Started
  • 783. Figure 14-82 shows that the host CRETE is not included in the MSS zone (we have enabled the outband agent for the switch in order to display zone information). This zone includes TUNGSTEN, which has no LUNs defined on the MSS. Figure 14-82 MSS zoning display We changed the MSS zone to include the CRETE server. We run cfgmgr on CRETE so that it scans its configuration and finds the disk located on the MSS as shown in Example 14-1. Example 14-1 cfgmgr to discover new disks # lspv hdisk0 00030cbf4a3eae8a rootvg hdisk1 00030cbf49153cab None hdisk2 00030cbf170d8baa datavg hdisk3 00030cbf170d9439 datavg # cfgmgr # lspv hdisk0 00030cbf4a3eae8a rootvg hdisk1 00030cbf49153cab None hdisk2 00030cbf170d8baa datavg hdisk3 00030cbf170d9439 datavg hdisk4 00030cbf8c071018 None Chapter 14. Using TotalStorage Productivity Center for Fabric 763
  • 784. Now, the agent on CRETE is able to run SCSI commands on the MSS and discovers that it is a storage server. Productivity Center for Fabric maps it correctly in Figure 14-83. Figure 14-83 MSS zone with CRETE and recognized storage server 14.7.9 Well placed agent strategy The placement of inband and outband agents will determine the information displayed: For a topology map, you need to define inband and outband agents on some selected servers and switches in order to discover all your topology. Switch zoning and LUN masking may restrict access to some devices. For a complete topology map, including correct device icons, you need to define inband and outband on all servers and switches, except on those supporting RNID. For information on zones, you need to define the switches as outband agents and set the user ID and password on the Advanced properties. For a complete Device Centric and Host Centric views, you need to place inband agents on all servers you want to be displayed. Before implementing inband and outband agents, you should have a clear idea of your environment and the information you want to collect. This will help you to select the agents and may minimize overhead caused by inband and outband agents. In our configuration, we decided to place one agent on GALLIUM which is connected to the two fabrics and has LUNs assigned on the FAStT storage server (Figure 14-84). 764 IBM TotalStorage Productivity Center: Getting Started
• 785. Figure 14-84 “Well-placed” agent configuration

The agent will use inband management to:
Query the directly attached devices.
Query the name server of the switches to get the list of other attached devices.
Launch inband management to other devices to get their WWN and device type (for RNID-compatible supported drivers).
Launch SCSI requests to get LUN information from storage servers.

You can see in Figure 14-85 that the agent on GALLIUM has returned information on:
Directly attached switches (ITSOSW1 and ITSOSW4)
Devices attached to those switches (if they are in the same zones)
LUNs defined on the FAStT for this server
Its own filesystems

Because, of the other hosts, only CLYDE runs RNID-compatible drivers, all other devices — excluding the switches and the FAStT storage server — are displayed with an unknown device icon. However, we have shown how we can get a complete map of our SAN by deploying just one inband agent.

Chapter 14. Using TotalStorage Productivity Center for Fabric 765
  • 786. Figure 14-85 Discovery process with one well-placed agent 14.8 Netview In this section we describe how to use the NetView program’s predefined performance applications and how to create your own applications to monitor the Storage Area Network performance. The NetView program helps you manage performance by providing several ways to track and collect Fibre Channel MIB objects. You can use performance information in any of the following ways: Monitoring the network for signs of potential problems Resolving network problems Collecting information for trend analysis Allocating network resources Planning future resource acquisition The data collected by the NetView program is based on the values of MIB objects. The NetView program provides applications that display performance information: NetView Graph displays MIB object values in graphs. Other NetView tools display MIB object values in tables or forms. 766 IBM TotalStorage Productivity Center: Getting Started
• 787. 14.8.1 Reporting overview The NetView MIB Tool Builder enables you to create applications that collect, display, and save real-time MIB data. The MIB Data Collector provides a way to collect and analyze historical MIB data over long periods of time to give you a more complete picture of your network's performance. We will explain the SNMP concepts and standards, demonstrate the creation of Data Collections, and show the use of the MIB Tool Builder as it applies to SAN network management. Figure 14-86 lists the topics we cover in this overview section.

NetView Reporting Overview
Understanding SNMP and MIBs
Configuring MIBs (copying and loading) — IBM 2109
NetView MIB Data Collector
MIB Tool Builder
NetView Graphing tool

Figure 14-86 Overview

14.8.2 SNMP and MIBs The Simple Network Management Protocol (SNMP) has become the de facto standard for internetwork (TCP/IP) management. Because it is a simple solution, requiring little code to implement, vendors can easily build SNMP agents for their products. SNMP is extensible, allowing vendors to easily add network management functions to their existing products. SNMP also separates the management architecture from the architecture of the hardware devices, which broadens the base of multivendor support. SNMP is widely implemented and available today. An SNMP network management system contains two primary elements:

Manager — This is the console through which the network administrator performs network management functions.
Agents — These are the entities that interface to the actual device being managed. Switches and directors are examples of managed devices that contain managed objects.

Important: In our configuration, the SNMP manager is NetView and the SNMP agents are IBM 2109 Fibre Channel switches.

These objects are arranged in what is known as the Management Information Base (MIB). SNMP allows managers and agents to communicate for the purpose of accessing these objects.
Figure 14-87 provides an overview of the SNMP architecture. Chapter 14. Using TotalStorage Productivity Center for Fabric 767
  • 788. SNMP architecture iSCSI MIB Application Server iSCSI Initiator SNMP agent IP Storage Ethernet iSCSI Target Desktop iSCSI Initiator 2109 Fibre Channel switch SNMP agent FC switch MIB FA/FC MIB FE MIB Tivoli SAN Manager “NetView” Figure 14-87 SNMP architecture overview A typical SNMP manager performs the following tasks: Queries agents Gets responses from agents Sets variables in agents Acknowledges asynchronous events from agents A typical SNMP agent performs the following tasks: Stores and retrieves management data as defined by the MIB Signals an event to the manager MIBs supported by NetView NetView supports the following types of MIBs: Standard MIB: All devices that support SNMP are also required to support a standard set of common managed object definitions, of which a MIB is composed. The standard MIB object definitions, MIB-I and MIB-II, enable you to monitor and control SNMP managed devices. Agents contain the intelligence required to access these MIB values. Enterprise-specific MIB: SNMP permits vendors to define MIB extensions, or enterprise-specific MIBs, specifically for controlling their products. These enterprise-specific MIBs must follow certain definition standards, just as other MIBs must, to ensure that the information they contain can be accessed and modified by agents. The NetView program provides the ability to load enterprise-specific MIBs from a MIB description file. By loading a MIB description file containing enterprise-specific MIBs on an SNMP management station, you can monitor and control vendor devices. Note: We are using the Brocade 2.6 enterprise-specific MIBs for SAN network performance reporting and the IBM TotalStorage IP Storage 200i iSCSI MIB 768 IBM TotalStorage Productivity Center: Getting Started
  • 789. MIB tree structure MIB objects are logically organized in a hierarchy called a tree structure. Each MIB object has a name derived from its location in the tree structure. This name, called an object ID, is created by tracing the path from the top of the tree structure, or the root, to the bottom, the object itself. Each place where the path branches is called a node. A node can have both a parent and children. If a node has no children, it is called a leaf node. A leaf node is the actual MIB object. Only leaf nodes return MIB values from agents. The MIB tree structure is shown in Figure 14-88. Note the leaf entry for bcsi which has been added into the tree. For more information regarding SNMP MIB tree structures. See the following Web sites relating to SNMP RFCs. http://silver.he.net/~rrg/snmpworld.htm http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/snmp.htm TOP CCITT (0) JOINT-ISO-CCITT (2) ISO (1) STD (0) REG AUTHORITY (1) ORG (3) MEMBER BODY (2) DOD (6) INTERNET (1) PRIVATE (4) DIRECTORY (1) MGMT (2) EXPERIMENTAL (3) ENTERPRISE (1) MIB (1) RESERVED (0) IBM (2) bcsi (1588) iSCSI Figure 14-88 MIB tree structure 14.9 NetView setup and configuration In this section we provide step by step details for copying and loading the Fibre Channel and iSCSI MIBs into NetView. We then describe the FE MIB and SW MIB in the Brocade 2109 Fibre Channel switch and also describe the FC (Fibre Alliance) MIB in the IBM TotalStorage IP Storage 200i device. Note: The FC (Fibre Alliance) MIB is shipped with most Fibre Channel switch vendors. Brocade Communications provides limited support for the FC MIB. 14.9.1 Advanced Menu In order to enable certain advanced features in NetView, we must first enable the Advanced Menu feature in the Options pull-down menu as shown in Figure 14-89. Shut down and restart NetView for the changes to take effect. Chapter 14. Using TotalStorage Productivity Center for Fabric 769
• 790. Figure 14-89 Enabling the advanced menu

14.9.2 Copy Brocade MIBs Before MIBs can be loaded into NetView, they must first be copied into the \usr\ov\snmp_mibs directory. All vendor-specific MIBs are located here. We accessed the Brocade MIBs from the Web site:

http://www.brocade.com/support/mibs_rsh/index.jsp

We downloaded the MIBs below and copied them to the directory.

v2_6trp.mib (Enterprise Specific trap)
v2_6sw.mib (Fibre Channel Switch)
v2_6fe.mib (Fabric Element)
v2_6fa.mib (Fibre Alliance)

Note: If you have unloaded all the MIBs in the MIB description file (\usr\ov\snmp_mibs), you must load MIB-I or MIB-II before you can load any enterprise-specific MIBs. These are loaded by default in NetView.

In Example 14-2 we show the \usr\ov\snmp_mibs directory listing with our newly added MIBs.

Example 14-2 MIB directory
Directory of C:\usr\ov\snmp_mibs
04/13/2002 09:33a 81,253 v2_6FA.mib
08/27/2002 02:45p 79,095 v2_6FE.mib
04/13/2002 09:33a 60,139 v2_6SW.mib
04/13/2002 09:33a 5,240 v2_6TRP.mib
4 File(s) 225,727 bytes

770 IBM TotalStorage Productivity Center: Getting Started
  • 791. 0 Dir(s) 6,595,670,016 bytes free C:usrovsnmp_mibs> 14.9.3 Loading MIBs After copying the MIBs to the appropriate directory, they must then be loaded into NetView. IBM 2109 The IBM 2109 comes configured to use the MIB II-private MIB (TRP-MIB), FC Switch MIB (SW-MIB), Fibre Alliance MIB (FA-MIB) and Fabric Element MIB (FE-MIB). By default, the MIBs are not enabled. Here is a description of each MIB and their respective groupings. MIB II-private MIB (v2_6trp.mib or TRP-MIB) The object types in MIB-II are organized into the following groupings: The System Group The Interfaces Group The Address Translation Group The IP Group The ICMP Group The TCP Group The UDP Group The EGP Group The Transmission Group The SNMP Group FC_MGMT (Fibre Alliance) MIB (v2_6fa.mib or FA-MIB) The object types in FA-MIB are organized into the following groupings. Currently Brocade does not write any performance related data into the OIDs for this MIB. Connectivity Trap Registration Revision Number Statistic Set Fabric Element MIB (v2_6fe.mib or FE-MIB) The object types in FE-MIB are organized into these groupings: Configuration Operational Error Accounting Capability FC Switch MIB (v2_6sw.mib or SW-MIB) The object types in SW-MIB are organized into the following groupings: swSystem swFabric swActCfg swFCport swNs swEvent swFwSystem swEndDevice Chapter 14. Using TotalStorage Productivity Center for Fabric 771
  • 792. To enable the MIBs for the IBM/Brocade switch, log into the switch via a telnet session, using an ID with administrator privilege (for example, the default admin ID). We enabled all four of the above MIBS using the snmpmibcapset command. The command can either disable or enable a specific MIB within the switch. Example 14-3 shows output from the snmpmibcapset command. Example 14-3 snmpmibcapset command on IBM 2109 itsosw2:admin> snmpmibcapset The SNMP Mib/Trap Capability has been set to support FE-MIB SW-MIB FA-MIB SW-TRAP FA-TRAP SW-EXTTRAP FA-MIB (yes, y, no, n): [yes] SW-TRAP (yes, y, no, n): [yes] FA-TRAP (yes, y, no, n): [yes] SW-EXTTRAP (yes, y, no, n): [yes] no change itsosw2:admin> NetView The purpose of loading a MIB is to define the MIB objects so the NetView program’s applications can use those MIB definitions. The MIB you are interested in must be loaded on the system where you want to use the MIB Data Collector or MIB Tool Builder. Some vendor’s specific MIBs are already loaded into NetView. Since we want to collect performance MIB objects types for the Brocade 2109 switch, we will load its MIB. On the NetView interface, select Tools → MIB → Loader SNMP V1. This will launch the MIB Loader interface as shown in Figure 14-90. Figure 14-90 MIB loader interface 772 IBM TotalStorage Productivity Center: Getting Started
• 793. Each MIB that you load adds a subtree to the MIB tree structure. You must load MIBs in order of their interdependencies. We loaded the v2_6TRP.MIB first by clicking Load, then selecting the TRP.MIB from the \usr\ov\snmp_mibs directory — see Figure 14-91.

Figure 14-91 Select and load TRP.MIB

Click Open and the MIB will be loaded into NetView. Figure 14-92 shows the MIB loading indicator.

Figure 14-92 Loading MIB

Chapter 14. Using TotalStorage Productivity Center for Fabric 773
  • 794. We then loaded the v2_6 SW.MIB, v2_6FE.MIB and v2_6FA.MIBs in turn using the same process. You must load the MIBs in order of their interdependencies. A MIB is dependent on another MIB if its highest node is defined in the other MIB. After the MIBs are loaded, we now verify that we are able to traverse the MIB tree and select objects from the enterprise-specific MIB. We used the NetView MIB Browser to traverse the branches of the above MIBs. Click Tools → MIB → Browser SNMP v1. to launch the MIB browser and use the Down Tree button to navigate down through a MIB (see Figure 14-93). Figure 14-93 NetView MIB Browser 14.10 Historical reporting NetView provides a graphical reporting tool that can be used against real-time and historical data. After loading the Brocade (IBM 2109) MIBs into NetView, we demonstrate how to compile historical performance data about the IBM 2109 by using the NetView MIB Data Collector and querying the MIB referred to in 14.9.3, “Loading MIBs” on page 771. This tool enables us to manipulate data in several ways, including: Collect MIB data from the IBM 2109 at regular intervals. Store MIB data about the IBM 2109. Define thresholds for MIB data and generate events when the specified thresholds are exceeded. Setting MIB thresholds enables us to automatically monitor important SAN performance parameters to help report, detect and isolate trends or problems. 774 IBM TotalStorage Productivity Center: Getting Started
  • 795. Brocade 2109 MIBs and MIB objects We now need to understand what MIB objects to collect. The IBM 2109 has four MIBs loaded and enabled, described in 14.9.3, “Loading MIBs” on page 771. We selected the MIB object identifiers in Figure 14-94 because of their importance in managing SAN network performance. SAN network administrators may want to specify other MIB object identifiers to meet their own requirements for performance reporting. You should consult your vendor-specific MIB documentation for details of the objects in the MIB. We will describe how to create a MIB Data Collector for the following object identifiers in the following MIBs, shown in Figure 14-94 and Figure 14-95. FE-MIB Error Group fcFXPortLinkFailures - Number of link failures detected by this FxPort fcFXPortSyncLosses - Number of loss of synchronization detected by the FxPort fcFXPortSigLosses - Number of signal losses detected by the FxPort. Figure 14-94 FE-MIB — Error Group SW-MIB Port Table Group swFcPortTXWords - Number of FC words transmitted by the port swFcPortRXWords - Number of FC words received by the port swFcPortTXFrames - Number of FC frames transmitted by the port swFcPortRXFrames - Number of FC frames received by the port swFcPortTXC2Frames - Number of Class 2 frames received by the port swFcPortTXC3Frames - Number of Class 3 frames received by the port Figure 14-95 SW MIB — Port Table Group 14.10.1 Creating a Data Collection Our first Data Collection will target the MIB object swFCPortTxFrames. The swFCPortTxFrames counts the number of Fibre Channel frames that the port has transmitted. It contains group information about the physical state, operational status, performance and error statistics of each Fibre Channel port on the switch for example, F_Port, E_Port, U_Port, FL_Port. Chapter 14. Using TotalStorage Productivity Center for Fabric 775
  • 796. Figure 14-96 describes the MIB tree where this object identifier resides. The root of the tree, bcsi, stands for Brocade Communication Systems Incorporated. The next several pages describe the step-by-step process for defining a Data Collection on the swFcPortTxFrames MIB object identifier using NetView. bcsi (1588) commDev (2) Fibre channel (1) fcSwitch (1) sw (1) swFCPort (6) swFCPortTable (2) IBM 2109 swFCPortEntry (1) private MIB tree swFCPortTxFrames (13) Figure 14-96 Private MIB tree for bcsi 1. To create the NetView Data Collection, select Tools → MIB → Collect Data from the NetView main menu. The MIB Data Collector interface displays (Figure 14-97). Select New to create a collection. Figure 14-97 MIB Data Collector GUI 776 IBM TotalStorage Productivity Center: Getting Started
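The numeric object ID for swFCPortTxFrames follows directly from the arc numbers in Figure 14-96, appended to the standard .iso.org.dod.internet.private.enterprise prefix (1.3.6.1.4.1). A small sketch (ours, not part of NetView) that derives the dotted OID from the named path:

```python
# Derive the numeric OID for swFCPortTxFrames by walking the private
# MIB tree of Figure 14-96 below the standard enterprise prefix.
PREFIX = [1, 3, 6, 1, 4, 1]          # .iso.org.dod.internet.private.enterprise
BCSI_PATH = {                        # node name -> arc number, from Figure 14-96
    "bcsi": 1588,
    "commDev": 2,
    "fibrechannel": 1,
    "fcSwitch": 1,
    "sw": 1,
    "swFCPort": 6,
    "swFCPortTable": 2,
    "swFCPortEntry": 1,
    "swFCPortTxFrames": 13,
}

oid = ".".join(str(n) for n in PREFIX + list(BCSI_PATH.values()))
print(oid)  # 1.3.6.1.4.1.1588.2.1.1.1.6.2.1.13
```

Only the leaf node (swFCPortTxFrames) returns MIB values from the agent; the SNMP manager appends a port index to this OID to address each F_Port instance.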
  • 797. 2. If creating the first Data Collection, you will also see the pop-up in Figure 14-98 to start the Data Collection daemon. Click Yes to start the SNMPCollect daemon. Figure 14-98 starting the SNMP collect daemon 3. The Data Collection Wizard GUI then displays (Figure 14-99). This is the first step in creating a new Data Collection. By default NetView has navigated down to the Internet branch of the tree (.iso.org.dod.internet). See Figure 14-88 on page 769 for the overall tree structure. Highlight private and click Down Tree to navigate to the private MIB. Figure 14-99 internet branch of MIB tree We have now reached the private branch of the MIB tree (.iso.org.dod.internet.private). See Figure 14-100. Chapter 14. Using TotalStorage Productivity Center for Fabric 777
  • 798. Figure 14-100 Private arm of MIB tree 4. Continue to navigate down the enterprise branch of the tree by clicking Down Tree. Figure 14-101 shows the enterprise branch of the tree (.iso.org.dod.internet.private.enterprise). Figure 14-101 Enterprise branch of MIB tree 778 IBM TotalStorage Productivity Center: Getting Started
  • 799. 5. We reach the bcsi branch of the tree by clicking Down Tree. Figure 14-102 shows the bcsi (Brocade) branch of the tree (.iso.org.dod.internet.private.enterprise.bcsi). Figure 14-102 bcsi branch of MIB tree Chapter 14. Using TotalStorage Productivity Center for Fabric 779
• 800. 6. We continue to navigate down the tree, using the path shown in Figure 14-96, and, as shown in Figure 14-103 on page 780, eventually reaching: .iso.org.dod.internet.private.enterprise.bcsi.commDev.fibrechannel.fcSwitch.sw.swFCport.swFCPortTable.swFCPortEntry.swFCPortTxFrames.

Figure 14-103 swFCPortTxFrames MIB object identifier

7. We selected swFCPortTxFrames and clicked OK. We received the pop-up shown in Figure 14-104 from the collection wizard. This pop-up occurs because this will be the first node added to this collection. NetView then adds the swFCTxFrames MIB Data Collection definition as a valid data collector entry.

Figure 14-104 Adding the nodes

780 IBM TotalStorage Productivity Center: Getting Started
• 801. This launches the Add Nodes to the Collection Dialog, which is the second step in creating a new Data Collection. See Figure 14-105.

Figure 14-105 Add Nodes to the Collection Dialog

8. We proceeded to customize the Collect MIB Data from section, using the following steps:
a. We entered the switch node name for which we wanted to collect performance data (in this case, ITSOSW2.ALMADEN.IBM.COM) and clicked Add Node. You can add a node either by selecting it on the topology map or by typing its IP address or hostname in the Node field. You can also select multiple devices on the topology map and click Add Selected Nodes from Map; this adds all the nodes selected on the topology map to the Collect MIB Data From field. We added several nodes to the collection by entering one device at a time in the Node field and clicking Add Node. To remove a node, click the node name in the list and click Remove.
b. We then customized the Set the Polling Properties for these Nodes section, using the following steps:
i. We changed the Poll Nodes Every field to 5 minutes. This specifies the frequency with which the nodes are polled.

Important: Before setting the polling interval, you should have a clear understanding of the available and used bandwidth in your network. Shorter polling intervals generate more SNMP data on the network.

ii. We checked Store MIB Data. This stores the collected MIB data in C:\usr\ov\databases.
iii. We checked the Check Threshold if box. This defines the arm threshold. We want to collect data and signal an event each time more than 200 frames are sent on a particular port. Because we checked this box, we are required to define the trap value and rearm number fields.
iv. We configured the then send Trap Number option. We used the default setting, which is the MIB-II enterprise-specific trap.

Chapter 14. Using TotalStorage Productivity Center for Fabric 781
• 802. v. We then configured the and rearm When field. We specified a rearm value of greater than or equal to 75% of the arm threshold value. This means that a trap will be generated and sent when the number of TX frames reaches 150. Note that these traps are NetView-specific traps (separate from Productivity Center for Fabric traps) and will therefore be sent to the NetView console.

9. Click OK to create the new Data Collection, shown in Figure 14-106. Select the swFCPortTxFrames Data Collection and click Collect.

Figure 14-106 Newly added Data Collection for swFCTxFrames

Note: It can take up to 2 minutes before the newly defined Data Collection is collected by NetView. To verify that data is being captured, navigate to C:\usr\ov\databases\snmpcollect. If there are files present, then the Data Collection is functioning properly.

10. Click Close and the Stop and restart Collection dialog is displayed, as in Figure 14-107. Click Yes to recycle the snmpcollect daemon. At this point the Data Collection status (Figure 14-106 above) should change from Suspended to To be Collected.

Figure 14-107 Restart the collection daemon

We are now collecting the swFCTxFrames data on ITSOSW2. Depending upon the level of granularity required for your reporting needs, you may want to collect data over shorter or longer periods. In our lab we collected every 5 minutes, but you may want to collect data once every hour for a week or once every hour for a month. We will now use the NetView Graph tool to display the data collected, as described in 14.10.4, “NetView Graph Utility” on page 784.

782 IBM TotalStorage Productivity Center: Getting Started
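The arm/re-arm pair configured above acts as a hysteresis: once the arm threshold (200) fires a trap, no further threshold trap is sent until the polled value has crossed back past the re-arm level (150), which then fires a re-arm trap and re-enables the threshold. The following is a minimal simulation under that interpretation of the NetView semantics; the function and trap names are ours, not NetView's.

```python
# Illustrative arm/re-arm hysteresis for one polled MIB value.
# ARM is the configured threshold (>200 TX frames per interval);
# REARM is the 75% re-arm value (150) specified above.
ARM, REARM = 200, 150

def run(samples):
    """Yield the trap fired (if any) for each successive polled sample."""
    armed = True                     # ready to fire a threshold trap
    for value in samples:
        if armed and value > ARM:
            armed = False            # suppress further threshold traps
            yield "threshold-trap"
        elif not armed and value < REARM:
            armed = True             # value recovered: re-arm
            yield "rearm-trap"
        else:
            yield None

print(list(run([100, 250, 260, 140, 300])))
# [None, 'threshold-trap', None, 'rearm-trap', 'threshold-trap']
```

Note that the sustained high sample (260) fires no second trap: the event stream stays quiet until the port traffic drops below the re-arm level.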
• 803. Note: We followed the same procedure to add the remaining metrics for Data Collection: swFCRxFrames, swFCTxErrors, and swFCRxErrors. For demonstration purposes we used a value of 50 for the arm threshold and a value of 75% for re-arm. Your values for arm/re-arm may differ from what we used.

14.10.2 Database maintenance You can periodically purge the Data Collection entries by selecting Options → Server Setup, then the Files tab page, and selecting Schedule SNMP Files to Delete from the drop-down list. See Figure 14-108. Select the purge day and a specific time.

Figure 14-108 Purge Data Collection files

Important: There are documented steps on how to perform important maintenance of Tivoli NetView. Refer to the IBM Redbook Tivoli NetView and Friends, SG24-6019.

Chapter 14. Using TotalStorage Productivity Center for Fabric 783
• 804. 14.10.3 Troubleshooting the Data Collection daemon If you find data is not being collected, ensure that the snmpCollect daemon is running and that there is space available in the collection file system \usr\ov\databases\snmpcollect. The daemon can stop running if there is no filesystem space. To verify that the daemon is running, type ovstatus snmpcollect at the DOS command prompt (see Example 14-4).

Example 14-4 snmpcollect daemon running
C:\>ovstatus snmpcollect
object manager name: snmpcollect
behavior: OVs_WELL_BEHAVED
state: RUNNING
PID: 1536
last message: Initialization complete.
exit status: -
Done
C:\>

If the snmpcollect daemon is not running, you will see a state value of NOT RUNNING from the ovstatus snmpcollect command, as shown in Example 14-5.

Example 14-5 snmpcollect daemon stopped
C:\>ovstatus snmpcollect
object manager name: snmpcollect
behavior: OVs_WELL_BEHAVED
state: NOT RUNNING
PID: 1536
last message: Exited due to user request.
exit status: -
Done
C:\>

The snmpcollect daemon can be started manually. At a command prompt, we typed ovstart snmpcollect. You will see the output shown in Example 14-6. We then issued an ovstatus snmpcollect for verification, as shown in Example 14-4.

Example 14-6 snmpcollect started
C:\>ovstart snmpcollect
Done
C:\>

Note: If no Data Collections are currently defined to the MIB Data Collector tool, the snmpcollect daemon will not run.

14.10.4 NetView Graph Utility We used the NetView graph utility to display the MIB object data that we collected in 14.10.1, “Creating a Data Collection” on page 775.

784 IBM TotalStorage Productivity Center: Getting Started
The NetView Graph tool provides a convenient way to display numerical performance information for collected data. We now show how to display the data from the Data Collection that was built for ITSOSW2 (swFCPortTxFrames). We start by single-clicking ITSOSW2 on the NetView topology map (Figure 14-109).

Figure 14-109 Select ITSOSW2

Select Tools → MIB → Graph Data to launch the graph utility. This reports on the historical data that has been collected for ITSOSW2. After you make this selection, NetView takes some time to process the data and present it in the graphical display; the graph build time depends on the amount of data collected. Figure 14-110 shows the progress indicator.

Figure 14-110 Building graph

After the graph is built, it displays the swFCPortTxFrames data that was collected (Figure 14-111). Note that there are multiple instances of the object ID mapped, that is, swFCPortTxFrames.1, swFCPortTxFrames.2, and so on. In this case they represent the data collected for each port in the switch.
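Because swFCPortTxFrames is a cumulative counter, a graph of the raw samples climbs monotonically; what is usually of interest is the per-interval rate. A sketch of the delta computation, using hypothetical sample data in the form of (timestamp, counter) pairs:

```python
def counter_deltas(samples):
    """Convert cumulative counter samples [(seconds, value), ...]
    into per-second rates for each interval between samples."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((v1 - v0) / (t1 - t0))
    return rates
```

For example, samples taken 30 seconds apart with counter values 100, 400, and 1000 yield rates of 10 and 20 frames per second for the two intervals. (Counter wrap handling is omitted for brevity.)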
Figure 14-111 Graphing of swFCTxFrames

For viewing purposes, we adjusted the x-axis (Time) by clicking Edit → Graph Properties in the open graph window. This allowed us to zoom in to shorter time periods. See Figure 14-112.

Figure 14-112 Graph properties

Any MIB object identifier that has been collected using the NetView MIB Data Collector can be graphed with the NetView Graph facility using this process.

14.11 Real-time reporting

In this section we introduce the NetView MIB Tool Builder for real-time reporting. Figure 14-113 provides an overview.
Describe the MIB Tool Builder
Use of the Tool Builder: build, modify, delete

Figure 14-113 Real-time reporting: Tool Builder overview

Important: Depending on the configuration, some advanced functionality may initially be disabled in NetView under Tivoli SAN Manager. This section requires that functionality to be enabled. To enable it, in NetView click Options → Polling and check the Poll All Nodes field. This is shown in Figure 14-114.

Figure 14-114 Enabling all functions in NetView

14.11.1 MIB Tool Builder

In this section we introduce the NetView MIB Tool Builder. The Tool Builder enables you to build, modify, and delete MIB applications without programming. MIB applications are programs used by NetView to monitor the network. A MIB application monitors the real-time performance of specific MIB objects on a regular basis and produces output such as forms, tables, or graphs.
We demonstrate how to build a MIB application that queries the swFCPortTxFrames MIB object identifier in the SW-MIB. This process can be used to query any SNMP-enabled device using NetView.

With the switch ITSOSW2 selected, we start building the MIB application by launching the Tool Builder: select Tools → MIB → Tool Builder → New. The MIB Tool Builder interface is launched, as in Figure 14-115. Click New to create a new Tool Builder entry for collecting data on ITSOSW2.

Figure 14-115 MIB Tool Builder interface

The Tool Builder Wizard Step 1 window is displayed (Figure 14-116). We entered FCPortTxFrames in the Title field and clicked in the Tool ID field to auto-populate the remaining fields. We clicked Next to continue with the wizard.

Figure 14-116 Tool Wizard Step 1
The Tool Wizard Step 2 interface is displayed. You can see that our title of FCPortTxFrames has carried over. We are now ready to select the display type; we can choose between Forms, Tables, or Graphs. We chose Graph and clicked New, as shown in Figure 14-117.

Figure 14-117 Tool Wizard Step 2

The NetView MIB Browser is now displayed. We use the MIB Browser to navigate down to the swFCPortTxFrames object identifier. Use the Down Tree button to navigate through the MIB tree. Figure 14-118 shows the path through the SW-MIB port table. Click OK to add the object identifier.

SW-MIB Port Table group:
private... enterprise... bcsi... commDev... fibrechannel... fcSwitch... sw... swFcPort... swFcPortTable... swFCPortTxFrames

Figure 14-118 SW-MIB Port Table
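The Down Tree navigation in Figure 14-118 amounts to following a name path through a tree of MIB nodes. A toy sketch of that lookup, using a nested dict that mirrors only the innermost names shown in the figure (numeric OIDs are omitted because the text does not give them):

```python
def walk_mib(tree, path):
    """Follow a list of node names down a nested-dict MIB tree;
    raises KeyError if any step along the path is missing."""
    node = tree
    for name in path:
        node = node[name]
    return node

# A toy fragment of the tree from Figure 14-118 (structure only).
SW_MIB = {"sw": {"swFcPort": {"swFcPortTable": {"swFCPortTxFrames": "leaf"}}}}
```

Each Down Tree click in the MIB Browser corresponds to one step of this loop.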
The newly created MIB application is displayed in the Tool Builder Step 2 of 2 window. See Figure 14-119 for the completed MIB application. Click OK to complete the definition.

Figure 14-119 Final step of Tool Wizard

The final window of the Tool Builder is now displayed, showing the newly created MIB application (Figure 14-120). Click Close to close the window. The new MIB application has been successfully created.

Figure 14-120 New MIB application: FCPortTXFrames
14.11.2 Displaying real-time data

Now that we have a MIB application, we want to collect real-time data from the switch. Select ITSOSW2 from the NetView topology map by single-clicking the ITSOSW2 symbol, then select Monitor → Other → FCPortTXFrames. Our MIB application FCPortTXFrames has been added to the menu (Figure 14-121).

Figure 14-121 Monitor pull-down menu

Clicking the FCPortTXFrames option launches a graph utility, shown in Figure 14-122.

Figure 14-122 NetView Graph starting

The collection of MIB data starts immediately after the FCPortTXFrames MIB application is selected from the Monitor → Other menu. Figure 14-123 shows the data being collected and displayed for each MIB instance of ITSOSW2.
Figure 14-123 Graph of FCPortTXFrames

The polling interval of the application can be controlled using the Poll Nodes Every field, located under Edit → Graph Properties. See Figure 14-124.

Figure 14-124 Graph Properties
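Behind the Poll Nodes Every setting is a simple fetch-and-sleep loop. A minimal sketch, with the getter and sleep functions injectable so the loop can be exercised without a switch; the getter stands in for an SNMP query and is an assumption, not a NetView API:

```python
import time

def poll(getter, interval_seconds, iterations, sleep=time.sleep):
    """Call getter once per interval; collect (iteration, value) pairs."""
    samples = []
    for i in range(iterations):
        samples.append((i, getter()))
        if i < iterations - 1:
            sleep(interval_seconds)
    return samples
```

Shortening the interval gives a more responsive graph at the cost of more SNMP traffic to the node, which is the trade-off the Graph Properties dialog exposes.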
This launches a dialog to specify how often NetView Graph receives real-time data for graphing, shown in Figure 14-125. This setting determines how often the nodes are asked for data.

Figure 14-125 Polling Interval

We continued to use the Tool Builder process defined in 14.11.1, “MIB Tool Builder” on page 787 to build additional MIB applications for real-time performance monitoring. We used the following MIB objects:

swFcPortTXWords
swFcPortRXC2Frames
swFCPortRXC3Frames
fcFXPortLinkFailures
fcFXPortSyncLosses
fcFXPortSigLosses

Figure 14-126 shows the newly defined MIB applications as they appear in the Tool Builder.

Figure 14-126 Tool Builder with all MIB objects defined
Figure 14-127 shows all of the above MIB objects as they appear in the NetView Monitor pull-down menu. Note that we abbreviated the names of the MIB applications listed in the Monitor → Other menu for ease of use.

Figure 14-127 All MIB objects in NetView

14.11.3 SmartSets

With Productivity Center for Fabric (Tivoli SAN Manager) providing the management of the SAN, we can further extend the management functionality of the SAN from a LAN and iSCSI perspective. NetView SmartSets give us this ability. This section describes the concept of the NetView SmartSet (see Figure 14-128 for an overview) and provides details on how to group and manage your SAN-attached resources from a TCP/IP (SNMP) perspective. By default, the iSCSI SmartSet is created by Productivity Center for Fabric when nvsniffer is enabled. SmartSets for iSCSI initiators and targets can be created using the process described here.

What is a SmartSet?
Why SmartSets?
Defining a SmartSet
SmartSets and Data Collections

Figure 14-128 SmartSet Overview
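Conceptually, a SmartSet's dynamic membership behaves like filtering the list of discovered nodes with a rule. A minimal sketch with made-up node records; the iscsi_role attribute is hypothetical and stands in for whatever attributes the membership rule actually tests:

```python
def smartset(nodes, predicate):
    """Return the subset of discovered nodes matching a membership rule,
    analogous to a SmartSet's dynamic membership."""
    return [n for n in nodes if predicate(n)]

# Hypothetical discovered nodes.
nodes = [
    {"name": "host1", "iscsi_role": "initiator"},
    {"name": "array1", "iscsi_role": "target"},
    {"name": "sw1", "iscsi_role": None},
]
initiators = smartset(nodes, lambda n: n["iscsi_role"] == "initiator")
```

An "initiators" and a "targets" SmartSet would simply be two such rules evaluated over the same discovered-node list, which is what the process in this section builds through the GUI.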